OpenAI CEO Sam Altman: Pentagon deal was meant to ‘cool’ down things between Anthropic and the Pentagon, says Altman in AMA on why the company closed the deal the same day the Pentagon banned Anthropic
OpenAI CEO Sam Altman has admitted that the company’s decision to enter a deal with the Pentagon was ‘rushed’ but insisted it was necessary to de-escalate growing tensions between the US military and rival AI firm Anthropic. The agreement, announced last week, allows OpenAI’s models to be used on classified military networks and came just hours after Anthropic rejected a similar deal and was labeled a ‘supply-chain risk’ by the Trump administration. According to a report by Fortune, Altman acknowledged during an AMA session on X (formerly known as Twitter) that the optics of the deal ‘don’t look good’, but said he believes OpenAI acted quickly to prevent a broader confrontation that could have harmed the AI industry. “If we are right and this does lead to a de-escalation between the DOW and the industry, we will look like geniuses… If not, we will continue to be characterized as rushed and uncareful,” he said. Altman added that a constructive relationship between government and AI companies is ‘critical over the next couple of years’, and criticized the Pentagon’s designation of Anthropic as a supply-chain risk, calling it “a very bad decision” that he hopes will be reversed.
Employee and public backlash
The deal between the ChatGPT maker and the Pentagon sparked backlash among OpenAI employees, many of whom had signed a letter supporting Anthropic’s refusal to accept the Pentagon’s terms. Protesters also chalked graffiti outside OpenAI’s San Francisco offices condemning the move, while Anthropic’s headquarters were marked with messages praising its stance. OpenAI staffer Leo Gao publicly questioned whether the contract provided real safeguards, criticizing the “all lawful purposes” clause as little more than “window dressing.”
Safeguards and legal language
OpenAI said that its contract binds the Pentagon to existing U.S. laws and Department of War policies that limit the surveillance of citizens and regulate autonomous weapons. Katrina Mulligan, OpenAI’s head of national security partnerships, argued that codifying these laws in the contract provides stronger protections. The company also promised technical safeguards, including classifiers to block prompts that violate red lines and fine-tuning of models to resist unsafe instructions. Legal experts, however, warned that Pentagon policies can change at will, raising doubts about how durable the safeguards are. Critics have also questioned how OpenAI defines “mass surveillance,” noting that U.S. intelligence agencies already purchase commercially available datasets, such as cell phone location data, that could be used to monitor citizens at scale. Mulligan said the contract prohibits mass domestic surveillance but admitted OpenAI cannot prevent agencies from buying such data independently.
Clash of philosophies
OpenAI executives argued that their layered safeguards, combining technical systems, deployment limits, and expert oversight, are more robust than Anthropic’s reliance on contractual language. Boaz Barak, an OpenAI researcher, said Anthropic had “unrealistic expectations” about contract terms. Altman also raised a broader question: who should decide how AI is used? He said he was “terrified of a world where AI companies act like they have more power than the government,” but equally fearful of a government that normalizes mass surveillance.