Anthropic's 72 Hours of Identity Crisis
Writing by: Ada, Deep Tide TechFlow
Tuesday, February 24. Washington, Pentagon.
Anthropic CEO Dario Amodei sat across from Defense Secretary Pete Hegseth. According to multiple media outlets including NPR and CNN, the meeting was “polite” in tone, but the content was anything but.
Hegseth delivered a final ultimatum: by 5:01 PM Friday, lift restrictions on Claude’s military use, allowing the Pentagon to deploy it for “all lawful purposes,” including autonomous weapon targeting and domestic mass surveillance.
Otherwise, cancel the $200 million contract, invoke the Defense Production Act for compulsory requisition, and list Anthropic as a “supply chain risk,” effectively blacklisting it as an adversary entity akin to Russia and China.
On the same day, Anthropic quietly released version 3.0 of its Responsible Scaling Policy (RSP 3.0), removing a core promise that dated to the company’s founding: that it would not train more powerful models unless safety measures could be guaranteed.
Also on that day, Elon Musk posted on X: “Anthropic is mass stealing training data, that’s a fact.” Community notes on X added reports that Anthropic paid $1.5 billion in settlement for training Claude using pirated books.
Within 72 hours, this AI company claiming to have a “soul” played three roles simultaneously: safety martyr, intellectual property thief, and Pentagon traitor.
Which is the real one?
Maybe all of them.
The Pentagon’s “Either comply or get out” story
The first layer of the story is simple.
Anthropic is the first AI company granted classified access by the U.S. Department of Defense. The contract, awarded last summer, has a cap of $200 million. Subsequently, OpenAI, Google, and xAI also secured contracts of similar scale.
According to Al Jazeera, Claude was used in a U.S. military operation this January, reportedly one connected to the capture of Venezuelan President Maduro.
But Anthropic drew two red lines: no support for fully autonomous weapons targeting, and no support for large-scale surveillance of U.S. citizens. Its position: AI is not yet reliable enough to control weapons, and no current laws or regulations govern its use in mass surveillance.
The Pentagon isn’t buying it.
White House AI advisor David Sacks publicly accused Anthropic last October on X of “weaponizing fear to capture regulation.”
Competitors have already capitulated. OpenAI, Google, and xAI have all agreed to let the military use their AI in “all lawful scenarios.” Musk’s Grok was approved for access to classified systems just this week.
Anthropic is the last one standing.
As of press time, Anthropic’s latest statement says it has no intention of backing down. But the Friday 5:01 PM deadline is looming.
An anonymous former DOJ and DOD liaison told CNN: “How can you simultaneously declare a company a ‘supply chain risk’ and force it to work for your military?”
Good question, but it isn’t one the Pentagon is asking. What it cares about is whether Anthropic will compromise; if not, it will resort to coercion, or discard the company as Washington’s pariah.
“Distillation attack”: a slap in the face
On February 23, Anthropic published a strongly worded blog accusing three Chinese AI companies of conducting an “industrial-scale distillation attack” on Claude.
The accused are DeepSeek, Moonshot AI, and MiniMax.
Anthropic claims they used over 24,000 fake accounts to initiate more than 16 million interactions with Claude, targeting its core reasoning, tool invocation, and programming capabilities.
They categorize this as a national security threat, asserting that distilled models are “unlikely to retain safety guardrails” and could be exploited by authoritarian governments for cyberattacks, disinformation, and mass surveillance.
The narrative is perfect, and the timing is impeccable.
It came just after the Trump administration eased export controls on Chinese chips, and just as Anthropic was looking for ammunition in its lobbying for tighter chip export restrictions.
But Musk shot first: “Anthropic is mass stealing training data and paid billions in settlement. That’s a fact.”
Tory Green, co-founder of AI infrastructure firm IO.Net, said: “You train your model on the entire web’s data, then others learn from your public API—that’s called ‘distillation attack’?”
Anthropic calls it an “attack,” but in the AI industry, this is common practice. OpenAI used it to compress GPT-4, Google to optimize Gemini, and even Anthropic itself does it. The only difference is, this time, they are the target.
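The passage above describes distillation only loosely. Stripped of scale, the mechanic is simple: query a teacher model, record its outputs, and fit a cheaper student to imitate them. The sketch below is a toy illustration under stated assumptions (a linear “student,” a stand-in `teacher` function, made-up query values); it is not drawn from Anthropic’s report or any real system.

```python
# Toy distillation sketch: a "student" learns to imitate a "teacher"
# purely from the teacher's outputs on queried inputs. All names and
# numbers here are illustrative assumptions.

def teacher(x):
    # Stand-in for a proprietary model served behind an API.
    return 2.0 * x + 1.0

def distill(queries, lr=0.01, epochs=2000):
    """Fit a linear student (w, b) to the teacher's responses
    via plain gradient descent on squared error."""
    labels = [teacher(x) for x in queries]  # the harvested interactions
    w, b = 0.0, 0.0
    n = len(queries)
    for _ in range(epochs):
        grad_w = sum((w * x + b - y) * x for x, y in zip(queries, labels)) * 2 / n
        grad_b = sum((w * x + b - y) for x, y in zip(queries, labels)) * 2 / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = distill([float(i) for i in range(-5, 6)])
# The student recovers the teacher's behavior (w ≈ 2.0, b ≈ 1.0)
# without ever seeing the teacher's internals.
```

The point of the toy: nothing in the procedure requires access to the teacher’s weights or training data, only its input-output behavior, which is why API-level distillation is so hard to distinguish from ordinary heavy usage.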
Singapore Nanyang Technological University AI professor Erik Cambria told CNBC: “The line between legal use and malicious exploitation is often blurry.”
Ironically, Anthropic paid a $1.5 billion settlement for training Claude on pirated books, yet now accuses others of using its public API to learn from Claude. That isn’t a double standard; it’s a triple standard.
Anthropic wanted to play the victim but ended up as the defendant.
Dismantling the safety promise: RSP 3.0
On the same day as its standoff with the Pentagon and its public spat with Silicon Valley, Anthropic released version 3.0 of its Responsible Scaling Policy.
Anthropic Chief Scientist Jared Kaplan said in an interview: “We believe stopping AI training doesn’t help anyone. In the context of rapid AI development, making unilateral commitments… while competitors accelerate, is pointless.”
In other words, if others don’t play fair, neither will we.
Core to RSP 1.0 and 2.0 was a strict promise: if a model’s capabilities outpaced its safeguards, training would be paused. That promise earned Anthropic a unique reputation in AI safety circles.
But 3.0 removes that.
Instead, it introduces a more “flexible” framework, separating safety measures that Anthropic can implement from safety recommendations requiring industry-wide collaboration. They plan to release risk reports every 3-6 months, reviewed by external experts.
Sounds responsible?
Chris Painter, an independent reviewer from nonprofit METR, said after reviewing early drafts: “This indicates that Anthropic believes it needs to enter a ‘triage mode’ because its risk assessment and mitigation methods can’t keep pace with capability growth. It more likely reflects society’s unpreparedness for AI’s potentially catastrophic risks.”
According to TIME, Anthropic spent nearly a year internally debating this rewrite, with CEO Amodei and the board unanimously approving. The official reason: the original policy aimed to foster industry consensus, but the industry never caught up. The Trump administration’s laissez-faire attitude toward AI development, even attempting to repeal state regulations, left federal AI legislation in limbo. Despite some hope for global governance frameworks in 2023, three years later, that door is closed.
A long-time AI governance researcher said bluntly: “RSP is Anthropic’s most valuable brand asset. Removing the pause on training is like an organic food company secretly tearing off the ‘organic’ label from its packaging and claiming their testing is now more transparent.”
Identity crisis under a $380 billion valuation
In early February, Anthropic raised $300 million at a $380 billion valuation, with Amazon as a cornerstone investor. Its annualized revenue has reached $14 billion, and over the past three years that figure has grown more than tenfold annually.
Meanwhile, the Pentagon threatened to blacklist it, Musk publicly accused it of data theft, and it stripped out its own core safety commitment. CTO Mrinank Sharma resigned, then tweeted: “The world is in danger.”
Contradiction?
Perhaps contradiction is in Anthropic’s DNA.
Founded by former OpenAI executives who worried that OpenAI was moving too fast at the expense of safety, Anthropic set out to build even more powerful models even faster, all while telling the world how dangerous those models are.
Their business model can be summarized as: we fear AI more than anyone, so you should pay us to build it.
This narrative worked perfectly in 2023-2024. AI safety was a hot topic in Washington, and Anthropic was the leading lobbyist.
By 2026, the tide turned.
“Woke AI” became a slur, state-level AI regulation bills were blocked by the White House, and although California’s SB 53, supported by Anthropic, was signed into law, federal action was virtually nonexistent.
Anthropic’s safety brand is slipping from “differentiation advantage” to “political liability.”
It is walking a tightrope: it needs to appear “safe” enough to maintain its brand, but also “flexible” enough not to be abandoned by markets and governments. The problem is, both tolerances are shrinking.
How much is the safety narrative worth?
Stacking these three issues together makes the picture clear.
Accusing Chinese companies of distilling Claude reinforces the narrative behind chip export controls. Removing the safety-pause commitment keeps Anthropic in the arms race. Rejecting the Pentagon’s autonomous-weapons demand preserves the last moral veneer.
Each step makes sense logically, but they contradict each other.
You can’t claim that Chinese companies’ “distillation” endangers national security while simultaneously removing your own safety-pause commitment. If the models are truly that dangerous, you should be more cautious, not more aggressive.
Unless you’re Anthropic.
In the AI industry, identity isn’t defined by your statements but by your balance sheet. Anthropic’s “safety” narrative is essentially a brand premium.
In the early AI arms race, this premium was worth money. Investors paid higher valuations for “responsible AI,” governments greenlit “trustworthy AI,” and customers paid for “safer AI.”
But by 2026, that premium is evaporating.
Anthropic now faces not a “whether to compromise” dilemma but a “whom to compromise with first” ranking. Compromising with the Pentagon damages the brand. Compromising with competitors nullifies the safety promises. Compromising with investors means losing both.
By Friday at 5:01 PM, Anthropic will deliver its answer.
But one thing is certain: the Anthropic that once thrived on “we’re different from OpenAI” is becoming just like everyone else.
The end of an identity crisis often means the disappearance of identity itself.