Standing Ground

What happens when an AI company and its CEO stick to their principles

Dario Amodei is not a timid man. He split from OpenAI to form Anthropic with his sister Daniela, took some of the company's best researchers with him, and built a competitor on the explicit premise that the field's most celebrated company wasn't taking its own dangers seriously enough. He's been saying uncomfortable things in public for years. But he typically says them in long essays with footnotes, so most people don't pay close attention.

One of those essays, published in January, opens with a scene from the film Contact. An astronomer who has just detected alien life is asked what single question she'd ask them. She answers: 'How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?'

Humanity, Dario argues, is about to be handed almost unimaginable power, and it is deeply unclear whether our social and political systems are mature enough to wield it. The risks he maps are not science fiction: autonomous weapons that kill without human oversight, AI-enabled mass surveillance that would make any historical authoritarian blush, systems capable of walking a determined person through the creation of a bioweapon. They are, uncomfortably, the near-term implications of technology his own company is building. He writes about them with the calm, precise urgency of someone who has thought about this for years.

Last Friday, he stopped writing about the risks and started living inside one.


Dario walked out of OpenAI in 2021 with several of the company's most senior researchers because he believed the field was moving faster than its ability to understand what it was building. Sam Altman, the CEO he left behind, built ChatGPT into a household name and, along the way, quietly removed OpenAI's ban on military use. Dario made a different bet. He founded Anthropic on a specific thesis: that safety and capability aren't in tension, that the most rigorously trustworthy AI would also be the most genuinely useful AI, and that someone needed to prove it rather than just say it.

For three years, proving it meant doing the quiet work. Anthropic became the first AI company on the Pentagon's classified networks, through a partnership with Palantir. The first at the National Laboratories, the first to provide custom models for national security customers. They walked away from hundreds of millions in revenue by cutting ties with Chinese military-linked firms, unprompted. Their $200 million Pentagon contract, awarded last July, came on the back of a real track record. The kind you earn over years, not the kind you claim in a press release.

This was a company that wanted to work with the military, and did work with the military, right up until the moment they were asked for something they'd decided they couldn't responsibly provide.

The Pentagon wanted two things removed from the contract: the restriction on using Claude for mass domestic surveillance of American citizens, and the restriction on powering fully autonomous weapons. They wanted unrestricted access for "all lawful purposes." Months of negotiation went nowhere. A 5:01 PM Friday deadline was set, the classic bureaucratic timestamp for a decision no one wants scrutinised over the weekend.

On Thursday evening, the night before the deadline, Dario published his reasoning in full. On autonomous weapons, his argument is technical: frontier AI isn't reliable enough to take human judgment out of the loop for lethal decisions. On surveillance, it's about legal lag: existing law wasn't written for systems that can silently assemble a comprehensive portrait of any citizen's life from scattered public data, at scale, in seconds. He explained it clearly, signed his name, and waited.

The next day, the hammer fell.


Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk to national security," a label normally reserved for foreign adversaries like Huawei. Undersecretary Emil Michael called Amodei a liar with a God-complex. The GSA scrubbed Anthropic from the federal government's AI procurement platform. Federal agencies were given six months to phase out all Claude deployments.

And then Elon Musk, whose own AI company xAI was approved for classified government settings that same week — the very week Anthropic was designated an enemy — posted on X that "Anthropic hates Western Civilization."

The man whose company stood to directly inherit Anthropic's government contracts declared that refusing to enable mass surveillance and autonomous weapons was an act of civilisational betrayal. Outside comic books, a villain's motivations are usually ambiguous. Not here.


What happened next surprised almost everyone. Probably including Dario.

Sam Altman told his staff that OpenAI holds the same red lines on mass surveillance and autonomous weapons. He went on CNBC and said: "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety." He said publicly that the Defense Production Act should not be threatened against AI companies.

Over a hundred Google employees sent a letter to their chief scientist demanding equivalent restrictions on Gemini. Staff at Microsoft and Amazon made similar demands. Employees from Google and OpenAI built notdivided.org as a public act of solidarity. Retired Air Force General Jack Shanahan, who once led the Pentagon's own AI ethics initiatives, wrote that Anthropic's red lines were "reasonable" and that current AI is "not ready for prime time in national security settings."

The AI industry had apparently been waiting for someone to draw a line. It turned out far more people were willing to stand behind it than anyone expected.


Then came the plot twist.

Hours after the blacklisting, Altman announced that OpenAI had reached a new agreement with the Pentagon to deploy its models on classified networks. The terms: prohibitions on domestic mass surveillance, human responsibility for the use of force, no fully autonomous weapons. The exact restrictions Anthropic had been designated a national security risk for insisting on. Altman publicly asked the Pentagon to offer the same terms to every AI company.

On Friday afternoon, Anthropic were the enemies of Western Civilization. By Friday night, their terms were the agreed standard.

Anthropic announced they would challenge the supply chain designation in court. Their statement was short: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."


The AI industry does not have a great track record when it comes to ethics. xAI has no meaningful safety commitments on record. OpenAI removed its military use ban quietly in 2024, then rebuilt guardrails only after Anthropic made the cost of having none publicly visible. Google dissolved its AI ethics board years ago. Across much of the industry, safety has functioned as reputation management, and principles as a starting position in a negotiation.

Dario has been writing about this drift for years. His argument is that these risks are near-term, not hypothetical, and that they become far harder to prevent if the companies building the technology treat their principles as opening bids.

Last Friday, his own principles were the negotiating position on the table. He declined to negotiate them.

What that refusal produced is something almost no lab in this industry could manufacture on purpose: a company that is now, demonstrably and publicly, the one that put principles ahead of profits. In a world where benchmarks tell you nothing about values and every lab's safety page reads like every other lab's safety page, Anthropic has given anyone wrestling with unease about AI a company they can feel good about supporting.


The immediate picture for Anthropic is complicated. Six months is a long time for enterprise customers to sit with uncertainty, and Altman was shrewd enough to secure a classified contract with the same red lines Anthropic fought for, paying none of the price. The lawsuit will probably be lengthy. There are real costs still to come.

And yet something has changed that can't be walked back. Anthropic paid for a standard the whole industry now gets to claim. More than that, they've made the choice of which AI company to support a meaningful one, at a moment when most people assumed all those choices were equal.

Dario Amodei is right about the technological adolescence, and that how we navigate it will shape what comes next. That navigation isn't only happening in boardrooms and classified military networks. It happens in the everyday choices individuals and organisations make about which AI they use and whose business they give their money to. Some companies stand their ground even when the costs are great; others quietly negotiate theirs away. Now we know the difference.

The astronomer in Sagan's story wanted to know how we survived. We don't have the answer yet. But survival probably looks less like a single breakthrough and more like a long series of moments where someone has every reason to move and chooses not to. Standing their ground, even when the ground shakes.


This post was written with the assistance of Claude, Anthropic's AI. The hero image was generated using Midjourney.