OpenAI CEO Sam Altman says he can’t speak for the Pentagon, but ‘Anthropic seemed more focused on…

OpenAI CEO Sam Altman believes Anthropic’s stricter contract demands caused its Pentagon deal to fall apart. While OpenAI secured a contract, Anthropic was labeled a national security risk. Altman said OpenAI relies on technical safeguards rather than contractual prohibitions to keep government AI use safe, and framed the deal as an attempt to de-escalate industry tensions.

OpenAI CEO Sam Altman has offered his most candid take yet on why his company managed to ink a deal with the Pentagon while rival Anthropic could not, suggesting that Anthropic’s insistence on tighter contractual controls may have been the breaking point. In an Ask Me Anything session on X on Saturday night, Altman responded to a user asking why the Department of War (DoW) went with OpenAI over Anthropic. While he said he couldn’t speak for the Pentagon, he didn’t hold back on his read of the situation.

“I think Anthropic may have wanted more operational control than we did,” Altman wrote, adding that Anthropic appeared “more focused on specific prohibitions in the contract, rather than citing applicable laws.”


Sam Altman says both sides were close before things fell apart

Altman noted that Anthropic and the Pentagon were reportedly very close to reaching an agreement before negotiations collapsed under pressure. “I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here,” he said.

The breakdown has had serious consequences. On Friday, Defense Secretary Pete Hegseth declared Anthropic a “supply chain risk to national security,” effectively blacklisting the company from military contracts. President Trump went further, ordering all federal agencies to stop using Anthropic’s products and calling the company “radical left” on Truth Social.

Anthropic, for its part, has said it will challenge the designation in court and that no amount of “intimidation” will shift its stance on mass domestic surveillance and fully autonomous weapons, the two red lines it refused to drop.

OpenAI says its safety approach relies on tech, not just contract language

Altman drew a clear distinction between how the two companies approach safety in government deployments. OpenAI, he explained, favours a “layered approach”: a safety stack it fully controls, cloud-only deployment, cleared forward-deployed engineers, and alignment researchers in the loop. “Although documents are also important, I’d clearly rather rely on technical safeguards if I only had to pick one,” Altman wrote.

OpenAI’s contract with the Pentagon prohibits the use of its technology for “unconstrained monitoring” of Americans’ private information and bars it from independently directing autonomous weapons. The company also added a third red line that Anthropic did not publicly emphasise: no automated high-stakes decision-making, such as social credit-style systems.

The deal was rushed, and Altman knows the optics aren’t great

Altman was remarkably upfront about the speed at which the deal came together, calling it “rushed” and acknowledging that the “optics don’t look good.” He said the agreement was partly motivated by a desire to de-escalate tensions between the Pentagon and the broader AI industry.

As part of the deal, OpenAI says it pushed for similar terms to be offered to all AI labs, including Anthropic. The company also publicly stated that Anthropic should not be designated a supply chain risk.

Whether that call for de-escalation lands remains to be seen. For now, the AI industry’s two biggest safety-focused labs find themselves on opposite sides of one of the most consequential government contracts in tech history.


