AI Company ‘Committed to Serve All of Humanity.’ The Contract It Signed with the DoW Says Otherwise.

OpenAI faces allegations that it entered a dubious contract with the Department of War (DoW). Fears that the DoW will use artificial intelligence for fully autonomous weapons and mass surveillance are circulating online.
Anthropic, the company behind Claude.ai, released a statement regarding its falling-out with the DoW. The company was concerned about its AI models being used for “domestic mass surveillance and fully autonomous weapons” and therefore did not agree to the DoW’s terms.
DoW Secretary Pete Hegseth deemed Anthropic a supply-chain risk in a scathing post on X. He wrote, “Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”

Sam Altman, the CEO of OpenAI, wrote on X, “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.” Moreover, Altman believes that the terms the DoW offered his company are ones “everyone should be willing to accept.”

He added, “We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.”
Understanding OpenAI’s agreement with the DoW
What’s in the agreement, and why was Anthropic unwilling to accept the terms? Here is the fine print, as posted on OpenAI’s official website.

OpenAI is legally compliant, but do the current laws suffice to mitigate the harm of unrestricted mass surveillance? Even if approvals are required for fully autonomous weapons, who will conduct these checks, and will they be required to sign their names when authorizing actions?
Dario Amodei, CEO of Anthropic, explained in an interview with 60 Minutes why AI may undermine citizens’ rights.
“As we defend ourselves from our autocratic adversaries, we have to do so in ways that defend our democratic values,” Amodei said, adding that he agreed to all of the DoW’s use cases except two.
“One is domestic mass surveillance. They’re worried that things may become possible with AI that weren’t possible before.” He cites the example of data collected by private firms and bought en masse by the government. Large-scale data collection is not illegal, but it simply was not done before, because it was impractical without AI. Now federal agencies like ICE can buy personal data and have the tools to process those datasets.
“There is an oversight question too. If you have a large army of drones or robots that can operate without any human oversight… That presents concerns, and we need to have a conversation about how that’s overseen, and we haven’t had that conversation yet.”
Amodei believes that the laws have not caught up with AI, and neither has Congress. Although OpenAI and Altman deny that their technology will be used for mass surveillance and autonomous weapons, it is uncertain whether current interpretations of the law and the Fourth Amendment will suffice as guardrails.
Have a tip we should know? [email protected]