OpenAI has rewritten the playbook on how Silicon Valley works with the U.S. military, unveiling detailed terms of its deal with the Pentagon and insisting that the agreement includes “more guardrails than any previous agreement for classified AI deployments.” The company is now leaning on precise legal language, technical architecture, and public assurances from its top executives to defend a partnership that CEO Sam Altman openly concedes was “definitely rushed” and whose “optics don’t look good.”
A Rushed Deal After Anthropic Fallout
The agreement, finalized “yesterday” according to OpenAI’s own description, came together in the immediate aftermath of a dramatic breakdown between rival Anthropic and the Department of War, the new name for the Pentagon under the Trump administration. After those negotiations collapsed on Friday, President Donald Trump ordered federal agencies to stop using Anthropic’s systems following a six‑month transition, while Defense Secretary Pete Hegseth designated the company a “supply‑chain risk,” effectively freezing a previously reported multi‑hundred‑million‑dollar relationship.
Into that vacuum stepped OpenAI. “Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies,” the company wrote in a detailed blog post explaining the contours of the deal.
On social media, Altman acknowledged both the speed and the backlash. In a thread on X, he told critics that the deal was “definitely rushed” and that “the optics don’t look good,” but insisted that the motivation was to calm an escalating conflict between Washington and frontier AI labs. Explaining why OpenAI still moved forward, Altman said: “We really wanted to de‑escalate things, and we thought the deal on offer was good.”
He went further, arguing that the outcome will be seen either as a bold act of industry leadership or a strategic misstep. “If we are right and this does lead to a de‑escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry,” Altman said. “If not, we will continue to be characterized as […] rushed and uncareful.”
Three Non‑Negotiable Red Lines
Central to OpenAI’s defense is a set of explicit “red lines” that it says apply to all of its national security work. “We have three main redlines that guide our work with the DoW,” the company wrote. These are:
● “No use of OpenAI technology for mass domestic surveillance.”
● “No use of OpenAI technology to direct autonomous weapons systems.”
● “No use of OpenAI technology for high‑stakes automated decisions (e.g. systems such as ‘social credit’).”
OpenAI stresses that other AI labs “have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments,” whereas its own agreement “better protects against unacceptable use.”
“In our agreement, we protect our redlines through a more expansive, multi‑layered approach,” the company wrote. “We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.”
Cloud‑Only Architecture and Safety Stack
A major pillar of OpenAI’s argument is not just what the contract says, but how the technology is deployed. Under a section titled “Deployment architecture,” the company described the arrangement as “a cloud‑only deployment, with a safety stack that we run that includes these principles and others.”
“We are not providing the DoW with ‘guardrails off’ or non‑safety trained models, nor are we deploying our models on edge devices (where there could be a possibility of usage for autonomous lethal weapons),” the post states. OpenAI says this setup “will enable us to independently verify that these redlines are not crossed, including running and updating classifiers.”
Katrina Mulligan, OpenAI’s head of national security partnerships, took that point further in a separate LinkedIn post, pushing back on the idea that a single contract clause is all that stands between American citizens and military misuse of AI.
Much of the debate, she argued, assumes “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War.”
“That’s not how any of this works,” Mulligan wrote, stressing that “deployment architecture matters more than contract language […] By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”
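To make that distinction concrete, here is a minimal, purely illustrative sketch of the kind of server‑side gate a cloud‑only deployment enables. Nothing in it reflects OpenAI’s actual safety stack; the classifier, category names, and functions are all hypothetical. The point is only that a provider who mediates every call can run, and keep updating, its own checks, which is not possible once model weights ship to edge hardware.

```python
# Illustrative sketch only -- not OpenAI's actual safety stack. All names here
# (classify_request, handle_request, BLOCKED_CATEGORIES) are hypothetical.

BLOCKED_CATEGORIES = {"mass_domestic_surveillance", "autonomous_weapons_control"}

def classify_request(prompt: str) -> set[str]:
    """Stand-in for a provider-run policy classifier.

    In a real cloud deployment this would be a trained model the provider can
    retrain and redeploy at any time -- the "running and updating classifiers"
    OpenAI describes.
    """
    flags = set()
    lowered = prompt.lower()
    if "monitor all u.s. persons" in lowered:
        flags.add("mass_domestic_surveillance")
    if "autonomous fire control" in lowered:
        flags.add("autonomous_weapons_control")
    return flags

def call_model(prompt: str) -> str:
    """Placeholder for the underlying model call."""
    return f"[model response to: {prompt!r}]"

def handle_request(prompt: str) -> str:
    """Every request passes through the provider's gate before the model.

    Because the model is only reachable through this API, the customer cannot
    strip the gate off or embed the weights directly in operational hardware.
    """
    if classify_request(prompt) & BLOCKED_CATEGORIES:
        return "Refused: request falls under a deployment red line."
    return call_model(prompt)

if __name__ == "__main__":
    print(handle_request("Summarize this logistics report."))
    print(handle_request("Connect the model to autonomous fire control."))
```

The design point the sketch illustrates is that the filter sits on the provider’s side of the API boundary, which is what makes “cloud‑only” materially different from handing over model weights for edge deployment.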
What the Contract Actually Says
OpenAI took a step rarely seen in classified‑adjacent tech contracts: it published key passages of the agreement itself. The contract declares that “The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well‑established safety and oversight protocols.”
On weapons and high‑stakes decision‑making, it continues: “The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high‑stakes decisions that require approval by a human decisionmaker under the same authorities.”
The document explicitly cites DoD Directive 3000.09, noting that “any use of AI in autonomous and semi‑autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.”
On surveillance and intelligence, it states that “any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.” The agreement adds that “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities,” and that it “shall also not be used for domestic law‑enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.”
OpenAI emphasizes that it keeps “cleared forward‑deployed OpenAI engineers helping the government, with cleared safety and alignment researchers in the loop,” framing this as an additional layer of human oversight on top of technical and legal constraints.
Answering the Toughest Questions
In its blog, OpenAI structured much of the explanation as a Q&A, directly addressing criticism and comparisons with Anthropic.
On why it agreed to the deal at all, the company wrote: “First, we think the US military absolutely needs strong AI models to support their mission especially in the face of growing threats from potential adversaries who are increasingly integrating AI technologies into their systems.”
“We originally did not jump into a contract for classified deployment, as we did not feel that our safeguards and systems were ready, and have been working hard to ensure that a classified deployment can happen with safeguards to ensure that redlines are not crossed,” it added.
The second reason, OpenAI said, was to calm tensions between Washington and the AI sector. “We also wanted to de‑escalate things between DoW and the US AI labs,” the company wrote. “A good future is going to require real and deep collaboration between the government and the AI labs. As part of our deal here, we asked that the same terms be made available to all AI labs, and specifically that the government would try to resolve things with Anthropic.”
On the sensitive question of whether it had simply accepted a contract that Anthropic refused, OpenAI answered: “Based on what we know, we believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic’s original contract.”
“We think our redlines are more enforceable here because deployment is limited to cloud‑only (not at the edge), keeps our safety stack working in the way we think is best, and keeps cleared OpenAI personnel in the loop,” the company said. “We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it.”
The company also took a clear stance on the government’s move against its rival. Asked whether Anthropic should be designated a “supply chain risk,” OpenAI responded: “No, and we have made our position on this clear to the government.”
Explicit “No” on Autonomous Weapons and Mass Surveillance
OpenAI attempted to put the most controversial fears to rest with direct yes‑or‑no answers. To the question “Will this deal enable the Department of War to use OpenAI models to power autonomous weapons?”, the company’s answer was blunt: “No.”
“Based on our safety stack, our cloud‑only deployment, the contract language, and existing laws, regulation and policy, we are confident that this cannot happen,” it wrote. “We will also have OpenAI personnel in the loop for additional assurance.”
On mass domestic surveillance, the response was similarly categorical. “Will this deal enable the Department of War to use OpenAI models to conduct mass surveillance on U.S. persons?” the company asked. The answer: “No.”
“Based on our safety stack, the contract language, and existing laws that heavily restrict DoW from domestic surveillance, we are confident that this cannot happen,” OpenAI said. “We will also have OpenAI personnel in the loop for additional assurance.”
The firm also insisted it is not being forced to compromise on safety features. “Do you have to deploy models without a safety stack?” the blog asks. “No, we retain full control over the safety stack we deploy and will not deploy without safety guardrails. In addition, our safety and alignment researchers will be in the loop and help improve systems over time.”
“We know that other AI labs have reduced model guardrails and relied on usage policies as the primary safeguard, but we think our layered approach better protects against unacceptable use,” the company added.
Legal Locks on Future Policy Changes
Another concern raised by critics is what happens if U.S. law or Department of War policies change in the future. OpenAI says the contract is designed specifically to prevent its systems from being swept into a more permissive regime.
“What if the government just changes the law or existing DoW policies?” the company wrote. “Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.”
As with any contract, OpenAI notes, “we could terminate it if the counterparty violates the terms. We don’t expect that to happen.”
Critics Warn of Loopholes
Despite the detailed assurances, some outside observers remain unconvinced. Shortly after OpenAI published its blog, Techdirt editor Mike Masnick argued that the agreement “absolutely does allow for domestic surveillance” because it ties private‑data handling to Executive Order 12333 alongside other laws. He described that order as “how the NSA hides its domestic surveillance by capturing communications by tapping into lines outside the US even if it contains info from/on US persons.”
Masnick’s critique underscores a key tension: OpenAI leans heavily on existing U.S. legal frameworks as a backstop, while civil liberties advocates have long argued those same frameworks are too broad or permissive in national security contexts.
OpenAI vs. Anthropic: Diverging Strategies
Although OpenAI avoids attacking Anthropic directly, parts of its explanation read as an implicit rebuttal to its rival’s refusal to sign. Anthropic has previously published its own account of why it walked away from Pentagon talks, citing concerns that its red lines on mass domestic surveillance and fully autonomous weapons would not be firmly upheld in the contracts it was offered.
OpenAI acknowledges that Anthropic identified two red lines—mass domestic surveillance and fully autonomous weapons—and explicitly says it shares those, plus a third on high‑stakes automated decision‑making. The company then explains “why we believe those same red lines would hold in our contract.”
On surveillance, OpenAI says: “It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose. We ensured that the fact that it is not covered under lawful use was made explicit in our contract.”
On weapons, it argues that “The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment.”
At the same time, OpenAI insists it did not ask the government to punish Anthropic, reiterating: “We don’t think Anthropic should be designated as a ‘supply chain risk’, and we have made our position on this clear to the government.”
Market and Public Reaction
The public controversy has already had visible market effects. As TechCrunch reported, Altman acknowledged on X that the deal triggered a “significant backlash” against OpenAI, noting that Anthropic’s rival assistant Claude briefly overtook OpenAI’s ChatGPT in Apple’s App Store rankings on Saturday.
That kind of user response highlights how national security contracts—once largely invisible to ordinary consumers—are now an explicit factor in how people evaluate and choose AI products. With OpenAI voluntarily publishing detailed contract language and Anthropic publicly explaining why it walked away, the industry has effectively turned a classified procurement dispute into a high‑profile debate over AI ethics, safety, and power.
A Test Case for “Democratic” AI
For OpenAI, the Pentagon agreement is now a test of its claim that close cooperation with government can be reconciled with strong safeguards and democratic values. “We believe strongly in democracy,” the company wrote. “Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process.”
“We also believe our technology is going to introduce new risks in the world, and we want the people defending the United States to have the best tools,” the blog’s opening section concludes.
Whether the combination of red lines, cloud‑only deployment, legal cross‑references and human oversight can truly prevent abuse in classified environments is now at the center of the public conversation. OpenAI has staked out the position that such cooperation is not only possible but necessary, while critics warn that even the strongest promises may bend under the pressure of secrecy, geopolitics and shifting national security priorities.