Walking It Back
What happens when an AI leader tries to please everyone
On Thursday night last week, Sam Altman sent a memo to OpenAI employees. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons," he wrote. "These are our main red lines." He told staff this was "a case where it's important to me that we do the right thing, not the easy thing that looks strong but is disingenuous."
By Friday afternoon, OpenAI had signed a deal with the Pentagon under an "all lawful purposes" standard, stepping into the contract Anthropic had just refused. By Saturday, Altman was on X admitting "the optics don't look good." By Monday, he was calling his own deal "opportunistic and sloppy" and announcing revisions.
If you've read my previous story, Standing Ground, you know the backstory: Anthropic refused to remove two restrictions from their Pentagon contract, got blacklisted as a supply chain risk, and watched OpenAI announce a replacement deal within hours. That post was about a company—and its leader—that said no. This one is about the company that said yes, and the leader behind that decision.
In Karen Hao's book Empire of AI, a former colleague described what it's like to work with Sam Altman: "He's so good at adjusting to what you say, and you really feel like you're making progress with him. And then you realize over time that you're actually just running in place."
I was reminded of that quote as this story unfolded. On Thursday, Altman aligned himself with Anthropic's position. On Friday, he took the deal Anthropic refused. On Saturday, he told X users the deal was rushed but defended it as de-escalation. On Monday, he shared an internal memo publicly, announced contract amendments to add language about surveillance protections, and said the government should reverse Anthropic's blacklisting. Each statement was perfectly calibrated for its audience, and each one slightly contradicted the last.
This pattern has a history. In 2023, Altman told the Senate that regulatory intervention on AI would be "critical." By 2025, he was sitting in front of another committee agreeing with Ted Cruz about the dangers of overregulation. OpenAI promised to commit 20 percent of its compute to AI safety through a team called Superalignment. That team was disbanded a year later. Altman approached Scarlett Johansson to lend her voice to ChatGPT. She declined, but the voice OpenAI shipped sounded remarkably like hers anyway, and the company had to pull it after a public outcry and the threat of legal action. AGI was going to "elevate humanity" and arrive by 2025. Then it became "not a super useful term." The pattern is consistent enough to be a management style: tell each room what it needs to hear, then move fast enough that no one can hold you to yesterday's version of the story.
The Pentagon deal brought this pattern into full public view for the first time. Altman struggled to manage the narrative across multiple audiences simultaneously because the audiences were all watching the same screen.
The day after Anthropic was blacklisted, CEO Dario Amodei sent a 1,600-word memo to his employees. The Information obtained the full document, and TechCrunch and Axios confirmed key excerpts. Amodei called OpenAI's safeguards "safety theater." He described their public messaging as "mendacious" and "straight up lies." He said Altman was falsely "presenting himself as a peacemaker and dealmaker."
Amodei is obviously not a neutral observer here. He runs the rival company that just lost a $200 million contract. But the specific details in the memo are worth examining regardless.
According to Amodei, the Pentagon had agreed to nearly all of Anthropic's terms. The entire negotiation came down to five words: a prohibition on "analysis of bulk acquired data." Amodei called it "the single line in the contract that exactly matched this scenario we were most worried about." The Pentagon asked Anthropic to delete it. Anthropic refused. The deal died.
OpenAI's contract, meanwhile, operates under the "all lawful purposes" standard. Legal experts immediately questioned whether the language Altman added after the backlash actually imposed any restrictions beyond what existing law already requires. As Representative Sam Liccardo pointed out during a congressional committee session: "There is only one problem with the Pentagon's approach. There is no law. The law is years behind the technology."
Even some of OpenAI's own people weren't convinced. Research scientist Aidan McLaughlin posted on X: "I personally don't think this deal was worth it." Leo Gao, who works on AI alignment at OpenAI, criticized the contract as "window dressing" over the "all lawful purposes" baseline.
What makes this moment different from Altman's previous reversals is where the pushback came from. Not from board members or researchers or journalists, but from consumers. ChatGPT uninstalls surged 295% in a single day. One-star reviews jumped 775%. Claude rose to number one on the US App Store for the first time, and has held onto that position all week. A boycott campaign called QuitGPT claimed over 2.5 million supporters and organized a protest outside OpenAI's San Francisco offices. Chalk graffiti appeared on the sidewalk: Where are your red lines?
For the first time in the brief history of the AI industry, users voted with their wallets over an ethical question rather than a product one. The AI company that said no gained market share. The one that said yes lost it.
While all of this was playing out, something much bigger was happening. Within hours of the OpenAI announcement on Friday, the United States and Israel launched strikes against Iran, carrying out nearly 900 rocket, bomb, and missile strikes in the first 12 hours. Military analysts noted that the operational tempo would have been impossible in earlier conflicts; AI compressed planning cycles that used to take days into hours. Claude was reportedly used in intelligence gathering and target selection for the strikes, even though the president had just ordered agencies to stop using Anthropic's technology.
The foreground drama about whether AI should power military operations was unfolding simultaneously with a massive military operation that AI was powering. Whether anyone in those negotiations knew what was coming is an open question. The timing, at minimum, reframes the urgency behind the Pentagon's demand for unrestricted access.
On Tuesday, Altman held an all-hands meeting at OpenAI. After days of public remorse about rushing the deal, what he told employees behind closed doors had a very different tone. The Pentagon, he said, had made clear that OpenAI doesn't "get to make operational decisions." Then he put it more bluntly: "Maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that."
In public, he was admitting mistakes and promising amendments. In private, he was telling his team to accept the terms.
That same day, Amodei was at the Morgan Stanley conference telling investors Anthropic is still talking to the Pentagon "to try to deescalate the situation." The Financial Times reported renewed discussions with Under-Secretary of Defense Emil Michael. Anthropic executives had reportedly expressed regret over how the standoff played out in the media. The two sides had been making progress, according to Axios, though it was unclear how much that leaked "straight up lies" memo might derail the talks.
On Thursday, we found out. The Pentagon officially designated Anthropic a supply chain risk, effective immediately. It is the first time an American company has ever received the label. Defense contractors must now certify they don't use Claude in their Pentagon work. Lockheed Martin said it would follow the directive and look to other providers. Amodei confirmed Anthropic will challenge the designation in court. Former CIA director Michael Hayden and retired military leaders called it a "category error," arguing the designation was designed to protect against foreign adversaries, not to punish American companies for declining to remove safeguards.
Meanwhile, the Pentagon has been using Claude to support operations in Iran.
Altman is a dealmaker. He leads with calculation, reads every room, and adjusts accordingly. That instinct built the most valuable startup in history. But it also produced a week where his own employees publicly called his government contract "window dressing" and millions of users pledged to abandon his product.
Amodei is a researcher. He leads with conviction, and his memo to employees reads like someone processing a betrayal rather than managing a news cycle. That instinct earned Anthropic an extraordinary wave of public goodwill. It also made his company the first in American history to be labeled a supply chain risk by its own government.
The leader who calculates loses trust, while the leader who sticks to his principles risks the business. One CEO spent four days walking back what he'd said. The other is heading to court over what he meant. Sometimes the right choice only becomes obvious long after it's been made.
This is a follow-up to Standing Ground. The story, as it stands, is still developing...
This post was written with the assistance of Claude, Anthropic's AI. The cover image was generated with Midjourney.