Claude developer Anthropic has refused to let its products be deployed for autonomous weapons or mass surveillance.
Anthropic CEO Dario Amodei has warned that artificial intelligence has the capability to undermine democratic values.
President Donald Trump on Friday ordered the U.S. government to stop using Anthropic's artificial-intelligence models and threatened the company with "major" consequences.
"Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow," Trump said in a post on his Truth Social platform.
Trump directed all federal agencies to immediately stop using Anthropic's Claude AI models. He added that the Department of Defense and other U.S. agencies have six months to phase out Anthropic's technology.
The conflict stems from Anthropic's contract with the Defense Department, which is worth up to $200 million. Anthropic has drawn a line in the sand over how the department can use its technology. Specifically, Anthropic doesn't want its products to be used to develop autonomous weapons or for mass surveillance.
The Pentagon has said it has "no interest" in using AI for those purposes.
But something curious happened after Trump's announcement about Anthropic: OpenAI said that it had struck a government deal and that it was able to secure carve-outs.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," OpenAI CEO Sam Altman said on the X platform. The Defense Department "agrees with these principles, reflects them in law and policy, and we put them into our agreement," he added.
Meanwhile, Anthropic CEO Dario Amodei said in a blog post that his company "tried in good faith to reach an agreement with the Department of War," using the name the Trump administration prefers for the Pentagon, "making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions."
The company does "not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons," he added. And, he said, the team at Anthropic believes "that mass domestic surveillance of Americans constitutes a violation of fundamental rights."
Trump's announcement came after days of public and private back-and-forth negotiations between the Pentagon and Anthropic that occasionally turned ugly. After Amodei defended his company's refusal to budge on Thursday, a Pentagon official accused him of trying to "personally control the U.S. military."
"It's a shame that @DarioAmodei is a liar and has a God-complex," Emil Michael, the undersecretary of defense for research and engineering, wrote on X. Michael, a former executive at Uber Technologies (UBER), also accused Anthropic of having a plan to "impose" its values on Americans.
A Jan. 9 memo handed down by Defense Secretary Pete Hegseth called for AI labs to amend their defense contracts to allow "any lawful use" of their technology. Hegseth gave the department 180 days to insert that standard into any contract that involves procuring AI services.
"DoD does not want a private company's usage policy to function as a veto over lawful military applications," Jessica Tillipman, a professor at George Washington University's law school who specializes in government procurement law, told MarketWatch over email.
"If DoD makes this the default clause across all AI vendors, it eliminates vendor-by-vendor negotiation over acceptable use and signals that firms unwilling to accept that baseline will be replaced," Tillipman said. "It sounds like boring contract boilerplate, but it is really about procurement leverage."
Following Trump's announcement, Hegseth formally declared Anthropic a supply-chain risk to national security, a designation previously applied only to foreign companies, notably those based in China and believed to be under government influence. As a result of Hegseth's action, no contractor, supplier or partner that does business with the U.S. military can conduct "any commercial activity" with Anthropic.
That has potentially major implications for Anthropic, which recently raised $30 billion at a $380 billion valuation and disclosed $14 billion in annual run-rate revenue, driven mostly by increasing enterprise adoption of its products. More than 500 business customers now spend over $1 million annually on Anthropic's products.
Microsoft (MSFT), which alongside Nvidia (NVDA) signed a strategic partnership with Anthropic last November, works with the Defense Department. Amazon (AMZN), which has invested billions of dollars in Anthropic, recently detailed a plan to support the Pentagon through access to Amazon Web Services.
Before this week, Claude was the only AI model approved for classified use at the Pentagon. Claude was even used, through a Palantir Technologies (PLTR) partnership, to help the U.S. capture Venezuela's then-president, Nicolás Maduro, according to the Wall Street Journal.
However, Elon Musk's xAI signed a deal to allow its controversial Grok model to be used for classified purposes earlier this week. Officials at multiple federal agencies have raised concerns over potential safety issues with Grok, the Journal reports.
Emily Bary contributed.
-William Gavin
This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.
Trump blacklists Anthropic - and OpenAI swoops in
02-28-26 0942ET
Copyright © 2026 Dow Jones & Company, Inc.