The Pentagon's Anthropic Feud 'Should Be a Wake-Up Call for Congress'

The contract dispute between the US Department of Defense and the AI developer Anthropic that boiled over at the end of February exposed in stark terms how laws and regulations have failed to keep up with the capabilities of artificial intelligence.

The Pentagon wanted to be able to use Anthropic's Claude AI for "all lawful purposes," while Anthropic wanted to prohibit the military from using it for mass domestic surveillance or for fully autonomous weapons systems. After Anthropic refused to meet the government's demands, President Donald Trump and Secretary of Defense Pete Hegseth said they would declare the company a "supply chain risk," prohibiting the use of its products in defense contract work.

Pentagon officials said the problem is moot: Current law doesn't allow for such surveillance, and the department has no plans to use the tool for autonomous weapons systems. But the laws and regulations aren't actually that clear, according to privacy and tech experts. And a contract dispute between a private company and a federal agency isn't the place to settle the question.

"This week exposed a real governance vacuum, and it should be a wake-up call for Congress," said Hamza Chaudhry, AI and national security lead at the Future of Life Institute. 

Read more: Congress Isn't Stepping Up to Regulate AI. Where Does That Leave Us Now?

The immediate result of the contract dispute was the Pentagon striking a deal with OpenAI instead. That agreement was less explicit about limits on using the company's products for mass surveillance or autonomous weapons, but OpenAI leaders said this week that they have taken steps to strengthen those guardrails. CEO Sam Altman said in a post on X that the Pentagon affirmed OpenAI's technology would not be used by the department's intelligence agencies.

(Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

OpenAI research scientist Noam Brown posted on X that he believed the world "should not have to rely on trust in AI labs or intelligence agencies" to ensure things like safety. "I know that legislation can sometimes be slow, but I'm afraid of a slippery slope where we become accustomed to circumventing the democratic process for important policy decisions," he wrote.

The question is whether, and how, Congress will deal with these issues.

AI plays a growing role in surveillance

The big risk of using AI for domestic surveillance isn't necessarily that Claude or ChatGPT will be spying on Americans. It's that these tools will be used to turn data the government already has, or could buy from private data brokers without needing a warrant, into information that would otherwise require a warrant.

Personal data is already being harvested from you, probably from the device you're using to read this. It includes information about your browsing history, your location and who you talk to or associate with. Private companies, such as app developers, can collect that data without your knowledge and sell it to other companies or to intelligence agencies. But until recently, it's been difficult for governments to process all of it in a way that makes surveillance easy. AI has changed that.

Anthropic CEO Dario Amodei specifically cited this situation in a Feb. 26 statement detailing the company's reasons for standing by its red lines. "Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life -- automatically and at massive scale."

Today's AI isn't ready for weapons systems

The other core dispute is that Anthropic wanted to keep the Pentagon from giving Claude full control of a weapons system without a "human in the loop." An AI tool being used to help select targets -- as is reportedly happening with Claude during the US war in Iran -- isn't beyond the pale for Anthropic or any of the major AI companies, because a person is involved in verifying and making the decision. What the company objected to was the use of AI models in making those decisions without human oversight. Amodei wrote that today's frontier models "are simply not reliable enough to power fully autonomous weapons."

Greg Nojeim, senior counsel and director of the security and surveillance project at the Center for Democracy and Technology, said it's clear that AI experts don't believe the models are ready for those kinds of uses, if they ever will be.

"It is striking that the Pentagon is rejecting that advice and insisting on being able to use this AI tool to kill people without human intervention," he said.

The Department of Defense has argued it can't actually use fully autonomous weapons, but Chaudhry told me the most commonly cited directive (PDF) on that issue doesn't prohibit them outright. The Department of Defense and Anthropic did not respond to requests from CNET to comment for this story.

Regardless, experts said, the question of using such weapons isn't one to be sorted out by unelected federal bureaucrats, military commanders or private companies. Elected officials need to reckon with this.

A person places signs protesting the use of AI in deadly weapons on the National Mall in Washington. One sign reads "Stop Trump's Killer Robots" and one is a robot dog with a gun on its back and "OpenAI" on its side.
Heather Diehl/Getty Images

A turning point for AI regulation?

The question of how to regulate AI, and who should do it, is nothing new. The Trump administration has called for a light touch on telling AI companies what to do, despite evidence of harms ranging from chatbots encouraging suicide to the AI-enabled erosion of personal privacy. States have tried to rein in AI developers to deal with these issues, but face pushback from a federal government intent on deciding how the tech is handled.

In the case of AI use by the military and federal spy services, the question of who should regulate is clear: Congress. 

"Unelected leaders of private sector companies cannot be relied upon to use a private contract to fill a gap that democratically elected lawmakers haven't filled legislatively," Chaudhry said. "What we need are statutory red lines -- clear, durable, democratically enacted rules about what AI can and cannot be used for in national security contexts, as AI transforms national security."

Nojeim said AI surveillance is "not the kind of conduct that the military should be able to self-authorize." Congress will consider reauthorization of part of the Foreign Intelligence Surveillance Act next month and could use that opportunity to decide whether intelligence agencies need warrants when using purchased data.

"Ideally, Congress would step in and limit the government's ability to buy data about Americans and bypass court authorization requirements, and ideally Congress would set the rules about how the Department of Defense should be protecting Americans against AI-powered surveillance and setting rules about the use of autonomous weapons that can kill without a human in the loop," he said.

Congress has a host of other AI-related regulatory issues to consider, but the debate about using AI for surveillance and autonomous weapons is eye-opening and could spur quicker action.

What about the longer-term effects of this dispute?

The Pentagon's retaliation against Anthropic -- this week's official declaration of the company as a supply chain risk -- could have a chilling effect on other companies concerned about how the government will use their technology.

"It sets a precedent that the government can retaliate against a company that has imposed safety limits on the use of its technology because it knows more about the risks and reliability of its technology than the government could," Nojeim said. "That precedent will make us all less safe."

Anthropic said Thursday that it had received a letter from the Department of Defense designating it a supply chain risk and that the letter's language was narrower than the broad threats made by administration officials the previous week. "With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," Amodei said in a statement, using Hegseth's preferred name for the department.

Amodei said the company intends to challenge the designation in court but is also continuing to negotiate with the Pentagon. 

Despite the dispute and the designation as a supply chain risk, the US military has continued to use Anthropic's tools, including in extensive ways during the current war in Iran. Amodei said Anthropic will keep supplying its AI models to the military and national security groups "at nominal cost and with continuing support from our engineers" for as long as it is allowed to. 

"Anthropic has much more in common with the Department of War than we have differences," Amodei said.
