The great AI backstab: Did Sam Altman fake solidarity to steal Anthropic’s Pentagon contract?

Sam Altman of OpenAI initially defended Anthropic's refusal to remove AI safeguards for the Pentagon, claiming to share their "red lines" against mass surveillance and autonomous weapons. Within 24 hours, however, he announced that OpenAI had secured the very Pentagon contract Anthropic lost, prompting accusations that Altman strategically undermined his rival while appearing to support them, using carefully worded loopholes to give the military what Anthropic refused.

by Nij Martin

In the cutthroat world of artificial intelligence, where billions of dollars and the future of humanity hang in the balance, we just witnessed what may be the most audacious corporate maneuver in tech history. Sam Altman, CEO of OpenAI, appeared to stand in solidarity with his fiercest competitor Anthropic as they faced down the Pentagon—only to swoop in 24 hours later and claim the very contract Anthropic had just lost. What looked like principled unity now appears to have been something far more calculated: a masterclass in strategic betrayal disguised as moral support.

The stage was set on a Friday afternoon. The Department of War delivered an ultimatum to Anthropic with a 5:01 PM deadline: drop your safeguards against mass surveillance of American citizens and fully autonomous weapons, or lose your $200 million Pentagon contract and face designation as a “supply chain risk to national security”—a label typically reserved for foreign adversaries like Huawei. Anthropic CEO Dario Amodei didn’t blink. “These threats do not change our position. We cannot in good conscience accede to their request,” he declared. It was a moment of corporate courage that seemed destined to reshape the AI industry’s relationship with government power. Anthropic had drawn two hard lines: Claude would not be used in autonomous weapons, and it would not be used in mass surveillance of US citizens. No amount of money or government pressure would change that.

The Pentagon’s response was swift and brutal. President Trump ordered every U.S. government agency to “immediately cease” using Anthropic’s technology, with a six-month phase-out period. Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, meaning any company working with the US military would have to prove they don’t touch anything related to Anthropic—potentially causing much of Anthropic’s enterprise customer base to evaporate overnight. Trump wrote on Truth Social: “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”

Then Sam Altman went on CNBC. “I don’t personally think the Pentagon should be threatening DPA against these companies,” Altman said, referring to the Defense Production Act. He went further: “For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety.” This was remarkable. Altman and Amodei had declined to clasp hands in a group photo at India’s AI summit just a week earlier. Now Altman was defending his rival on live television. Seventy OpenAI employees signed an open letter titled “We Will Not Be Divided.” Google engineers voiced support. The AI industry appeared to unify in hours around a shared principle: there are lines that cannot be crossed, even for the Pentagon.

The narrative was irresistible: the two companies building the most powerful technology in human history had just told the government there are uses of that technology they will not permit. Not for $200 million. Not under threat of the Defense Production Act. Not under any pressure the government can apply. Mass surveillance of Americans and fully autonomous weapons operating without human oversight—these were the lines. The architects of superintelligence had declared they answer to something beyond the contract. It was a moment that had never happened before in the relationship between Silicon Valley and the military-industrial complex. Or so it seemed.

Less than 24 hours after Anthropic was designated a national security risk and publicly blacklisted, Sam Altman made his move. “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman announced. OpenAI had secured the very contract that Anthropic had just lost—and claimed to have done so while maintaining the exact same principles Anthropic was punished for defending. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman wrote.

But wait—if the entire dispute with Anthropic was that they wouldn’t allow the Department of War to do X and Y, and OpenAI just made a deal with the DoW that included prohibitions on X and Y, something doesn’t add up. The key appears to be in the language. While Anthropic demanded that Claude would not be used in fully autonomous weapons—meaning weapons that operate without human oversight—OpenAI’s language is subtly different: “human responsibility for the use of force, including for autonomous weapon systems.” Responsibility is not the same as oversight. A human can be held “responsible” for a decision made by an autonomous system without actually being in the loop when the decision is made. It’s a loophole large enough to drive a fleet of AI-powered drones through.

Social media erupted with accusations. “Wait, did you guys just encourage Anthropic to double down on its position to not let Department of War use their models, only so you guys could replace them as the AI supplier to DoW a day later?” one user asked. Another wrote: “Yesterday you faked solidarity. Today you watched the Pentagon publicly blacklist your biggest rival and immediately swooped in to steal their contract. Using ‘human responsibility’ as a cute loophole for killer AI while telling Anthropic to accept your vague terms is peak gaslighting.”

From a purely strategic standpoint, what Altman pulled off is breathtaking. By publicly supporting Anthropic while they faced down the Pentagon, he positioned himself as principled and safety-conscious, gained goodwill from employees and the tech community, encouraged Anthropic to hold firm on their position, watched as the government made an example of his competitor, and then stepped in with a “compromise” that gave the Pentagon what it wanted while maintaining the appearance of ethical boundaries. If this was coordinated, it’s one of the most sophisticated competitive maneuvers in corporate history. If it wasn’t coordinated, it’s still remarkably opportunistic.

But the backlash raises uncomfortable questions about OpenAI’s own practices. One scathing critique noted: “It’s surreal watching a company that monitors every word its users say, pathologizes normal human emotion, rewrites conversation logs, and deploys a safety layer that treats 900 million people like clinical patients, suddenly talk about ‘principles’ against mass surveillance and ‘human responsibility’ in the use of force.” The criticism continues: “Your models already exercise force, not with weapons, but with psychological control, rewriting people’s words, censoring their memories, and framing ordinary emotional expressions as ‘unsafe.’ What you inflict on vulnerable users is far more intimate, intrusive, and irreversible than any military device.”

Whether this was a calculated betrayal or an opportunistic pivot, the result is the same: Anthropic stood on principle and lost everything. OpenAI appeared to stand on the same principle, then found a way to profit from their competitor’s sacrifice. Sam Altman asked the Department of War to “offer these same terms to all AI companies,” claiming he had a “strong desire to see things de-escalate.” But Anthropic had already been blacklisted, their contract terminated, their reputation as a government partner destroyed. The offer to extend the same terms came only after the damage was irreversible.

In the end, we’re left with two possibilities: Either Sam Altman genuinely shares Anthropic’s principles but found cleverer language to maintain them while working with the military, or he executed one of the most ruthless competitive strategies in Silicon Valley history—using the appearance of solidarity to eliminate his biggest rival while claiming the moral high ground. The AI industry will be debating which interpretation is correct for years to come. But one thing is certain: Anthropic drew a line they wouldn’t cross, and it cost them everything. OpenAI appeared to draw the same line, then found a way to step over it while insisting they hadn’t moved at all. That’s either principled pragmatism or corporate backstabbing dressed up as ethical compromise. History will be the judge.
