I almost stopped believing corporations could be moved. Then Anthropic happened (and what it means for the animal protection movement).
Guest blog by Leah Garcés, animal advocacy leader and founder of the Transfarmation Project. First published on her Substack.
About halfway through some intensive qualitative research — interviewing 50 experts on gaps in the work to end factory farming — I realized I had missed something big.
It was the week that Anthropic hit the headlines, resisting the Pentagon's demand to hand over the goods — the goods being arguably the most powerful AI platform in the world. Anthropic said no. The Pentagon, under Defense Secretary Pete Hegseth, wanted unrestricted use of Claude for any lawful military purpose, including domestic mass surveillance and autonomous weapons. Anthropic refused, because they felt it was not aligned with democratic and American values. They stood up and invoked the most American of values — the First Amendment right to speak out against our government. CEO Dario Amodei said it plainly: "Disagreeing with the government is the most American thing in the world." This was while OpenAI's Sam Altman struck a deal with the Pentagon the same day.
This was a moment of moral resistance, and I was taken aback.
As I've consulted on AI safety and the feasibility of corporate campaigns, I've heard it said that whoever wins the AI race wins everything. I've heard a lot of pessimism that corporations — which generally are not moral agents, and which seek to rule the world — have a low probability of ever doing the right thing. By "right," I mean in this AI case not pushing our species into existential crisis. Not a big deal. :)
But then Anthropic did something incredibly courageous. In a time when billionaires rule, they risked billions — including a $200 million Pentagon contract and their status as the only AI company deployed on classified military networks. For what reason? They referred to values. They referred to morals — about what is right, what is safe, and what is American. This was driven by the people inside the organization: individuals who had everything to gain and everything to lose, but who made a choice.
You can see Anthropic trying on its moral weaponry, its combat gear, to see if it will work. You saw it at the Super Bowl, where they ran two darkly comic spots satirizing AI platforms that serve ads — in one, a man asks a chatbot for advice on communicating with his mother, and it seamlessly pivots to pitching a cougar dating site. The tagline: "Ads are coming to AI. But not to Claude." This came just as OpenAI announced it would introduce ads to ChatGPT's free tier. Those of us who have been using Anthropic for some time know it has always taken a different approach; its policies lend themselves to more transparency and protection for the user. Now they are going public with this as a brand.
And a popular movement is rising behind them. Rutger Bregman — Dutch historian and author of Moral Ambition — called Anthropic heroic and urged his followers to switch from ChatGPT to Claude, comparing it to the Montgomery Bus Boycott: targeted, strategic, and potentially movement-defining. We can now see the difference between these companies.
What will power look like in the future? I think it will still be — as it always has been — the courage to stand on moral ground. Fostering that at every level is hard to measure, though. And the tactics for creating moral ground, the strategies worth investing in, are not easy. They involve long pipelines starting at the high school and university level. They involve returning to a time of philosophical debate, of being given the chance to experience moral dilemmas and to bear the consequences.
My research is looking to identify gaps in the farmed animal protection space. It was inspired — and please don't freak out, oh woke ones — by the anti-abortion movement, which performed an autopsy on itself when Obama came to power. They produced a report called the Growth and Opportunity Project. They realized that to win, they needed to build long-term, durable power. This resulted indirectly in new ventures like Turning Point, the late Charlie Kirk’s debate circle that created intellectual conversations at community and campus level. Our movement has optimized for short-term, measurable wins — and that's fine in the early growth phase of a movement. But once the opposition mobilizes, we have no power to protect our gains. That's where I think we are now. We need to build durable power. I created a list of what I thought the priority projects should be to do that. I identified leaders, key funders, key thinkers, and even outside-movement leaders — from immigration reform to Teach For America — to interview and question. Then halfway through, someone said: yes, I agree, but how the hell do we know what power will look like in five years, let alone thirty?
Watching Anthropic flex its moral muscles — and seeing a movement begin to form behind them — caused me to pause. I'd love to know if Anthropic predicted this moment. Did they have a conversation in advance and say, when asked, we will say no? Were they ready? Either way, the company had already set itself on a slightly different path, one where morality and ethics would play a role. It gives me hope, honestly — not just for our future with regard to AI, but for the future of the farmed animal protection movement.
It's very easy to throw your hands up and say: we are living in an uber-capitalist moment. Companies aren't worth our time. We have been disheartened to see the recent backsliding of corporate commitments to improve animal welfare. We worry that they will always maximize for profit; that it will always be the driving force. And while that is the default, I believe that companies are made up of people. And people are basically good (try not to roll your eyes — they are!). Companies can be swayed, even transformed, by public sentiment. We just have to build the levers and the environment that help morality to flourish. That, for me, is the question I'm circling around now, the one I missed. What led to Anthropic's leaders having the courage to take the stance they did in the world we live in? In a shifting world, where roles and power and truth seem to be spinning out of control, how do we assert a strategy worth trying that increases moral bravery?
