
OpenAI is facing increasing scrutiny after signing an agreement with the Department of Defense to supply its technology across the military’s classified networks.
On Saturday, OpenAI robotics team lead Caitlin Kalinowski resigned, citing concerns about the deal, which she described as “rushed without guardrails defined.” Her criticism cuts to the heart of a dispute over whether OpenAI’s agreement to deploy its models in classified military networks contains the meaningful protections against mass surveillance and autonomous weapons the company claims.
The deal came just hours after Anthropic refused to remove restrictions on government use of fully autonomous weapons and mass surveillance from its own contract. In an unprecedented move against a domestic company, the Trump administration responded by declaring Anthropic a national security supply chain risk, banning its use across the federal government and barring the military from working with any company that conducts “commercial activity” with Anthropic. Hours later, OpenAI announced its agreement to deploy its models in the U.S. government’s classified networks.
OpenAI CEO Sam Altman implied that he had found a way to retain the same red lines as Anthropic while still delivering capabilities the U.S. government sought. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he wrote. “The [Department of Defense] agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Yet the wording of the publicly released portions of the agreement suggests terms far softer than Anthropic’s.
Analysts worry the deal’s surveillance provisions leave open the possibility of lawful mass data processing and analysis, even though the terms appear to foreclose it. “They are taking very technical legal words of art such as surveillance… which ordinary people understand in one way and national security lawyers understand in other ways to muddy the waters,” said Brad Carson, president of Americans for Responsible Innovation and a former U.S. representative and Army under secretary during the Obama administration. “To a national security lawyer, it means something very different to ordinary people.”
Under national security laws, “surveillance” has a narrow technical meaning—focused on the deliberate interception of the communications of U.S. entities, citizens, and permanent residents. But mass collection of general data from databases and the incidental capture of Americans’ communications while targeting foreigners fall outside that definition. The “lawful” use of systems to monitor non-Americans could therefore reveal vast quantities of data on American citizens, with AI systems making such data easy to access and interpret.
The government has also spent the last seven years purchasing and compiling vast quantities of consumer data, ranging from social media activity to geolocation records. Surveillance laws predate large language models, which make it easy to map vast quantities of data, further stretching the bounds of what might be considered “lawful use” by the government. Before large language models, such data may have been useful, but it remained difficult to exploit at scale. Now AI can offer the government structured, detailed insights into populations and specific groups without specifically targeting any one individual, keeping the analysis within the bounds of the law.
After OpenAI initially announced its deal with the Pentagon in early March, Altman began to field questions on X, announcing several changes to the contract’s language over the following days. Most notably, he posted a reference to commercially acquired databases. “Our tools will not be used to conduct domestic surveillance of U.S. persons, including through the procurement or use of commercially acquired personal or identifiable information,” he said.
“The new contract language that OpenAI appears to have agreed to according to Sam Altman does seem like it’s a significant improvement over [its] previous language when it comes to surveillance,” Charlie Bullock, a senior research fellow at the Institute for Law and AI, told The Dispatch. But he noted that the full contract isn’t public: “it comes down to whether we trust Sam Altman and OpenAI or whether we don’t trust them.”
Many are skeptical that OpenAI can uphold the safety restrictions it claims to have negotiated with the Pentagon. “I’m not convinced that when an impasse comes, OpenAI is going to be able to hold the line,” George Washington University law professor Jessica Tillipman said. “In most government contracts … it’s not that easy usually for the private party to just walk away.”
“The administration has already shown that if it doesn’t get its own way, it’s willing to throw a nuclear bomb in the mix,” Tillipman added, referring to the administration’s declaration of Anthropic as a supply-chain risk. In response, Anthropic has filed a lawsuit against the Pentagon over the designation, and more than 30 workers at OpenAI and Google have filed an amicus brief in support.
Nevertheless, the administration’s show of force suggests significant limits on whatever leverage OpenAI might have.
OpenAI has emphasized that it will use its technical “safety stack,” the set of filters and controls the company wraps around its model before deploying it, to monitor usage and ensure that its red lines on surveillance and weapons systems “are not crossed.” Yet this may be insufficient. “The technical stack, whether or not a model is ‘aligned,’ is not going to be a sufficient bulwark to offset illegal use,” Sarah Shoker, a senior research scholar at the Berkeley Risk and Security Lab and former lead of OpenAI’s geopolitics team, told The Dispatch. For example, the U.K.’s AI Security Institute has found jailbreaks, methods of bypassing safety controls, in every system it has tested.
Even with OpenAI engineers working with the Pentagon and providing visibility, as the company claims will happen, it is unclear whether OpenAI would, or could if the use case is lawful, prevent the administration from using the model for surveillance. “Oftentimes the only recourse is the government acts, and you fight it out later,” Tillipman told The Dispatch.
Altman is also facing scrutiny from his own employees. In a post on X, he conceded the company “shouldn’t have rushed” the deal, acknowledging it looked “opportunistic and sloppy.” One hundred OpenAI employees have signed a public letter calling for companies to refuse domestic mass surveillance and fully autonomous weapons systems. Jasmine Wang, who works on AI safety at OpenAI, posted that she wanted “independent legal counsel” to analyze the contract language. OpenAI research scientist Aidan McLaughlin declared that he didn’t “think the deal was worth it,” while praising the “volume and thoughtfulness of discussion internally.”
“Very very few people at OpenAI are excited about facilitating mass surveillance, or autonomous weaponry,” Tim Marple, who worked on OpenAI’s intelligence and investigations team in 2024, told The Dispatch. “That’s why you see him waffling on this, because they face a catastrophic loss of labor if they actually follow through on something that is so opposite to the alignment of their various teams … These engineers can just go to Anthropic.”
For Marple, the writing has long been on the wall for a deal like this. “I quit the day after the election, because I knew that this kind of thing was happening, and I’m actually surprised it took this long to happen,” he said.
OpenAI’s posture toward the military has shifted several times. The company completely prohibited military sales until January 2024, when it revised its policies to remove the blanket ban. Yet as early as 2023, the Pentagon was reportedly testing OpenAI models through Microsoft. Earlier this year, both OpenAI and Anthropic participated in a competition to produce voice-controlled autonomous drone swarming, a technology that allows AI to coordinate large numbers of drones as a single unit without a human controlling each one.
Meanwhile, competition between Anthropic and OpenAI is intensifying. Anthropic is reportedly on track to generate annual revenue of almost $20 billion, more than doubling its revenue since the end of 2025. According to analysts at Epoch AI, a nonprofit research organization focusing on the future of AI, Anthropic could overtake OpenAI in annualized revenue by mid-2026 if current trends continue.