The next wave of AI disruption: Anthropic's new model unexpectedly leaked, causing cybersecurity stocks to plummet at market open


A yet-to-be-released AI model from Anthropic was accidentally exposed due to a data leak, reigniting market concerns that AI could upend the cybersecurity industry; cybersecurity stocks fell across the board in premarket trading on Friday.

According to Fortune magazine, Anthropic is developing and has begun testing a new AI model with early users. The company acknowledged that descriptions related to the model were leaked because they were stored in a publicly accessible data cache.

An Anthropic spokesperson described the new model as an “incremental breakthrough” in AI performance, and called it the company’s “most powerful model built to date.” Notably, the leaked file mentions that the model may bring “unprecedented cybersecurity risk.”

Hit by the news, the cybersecurity sector fell rapidly at the open. CrowdStrike was down 6.0%, Zscaler down 4.7%, Palo Alto Networks down 4.6%, Okta down 4.26%, Fortinet down 4.0%, Cloudflare down 3.9%, and the Global X cybersecurity ETF tracking the entire sector was down 2.7%.

Leak process: draft files exposed in a public database

According to Fortune magazine, a draft blog post involving the new model had been stored in an unencrypted, publicly searchable data repository until late Thursday evening.

The files show that the new model is named “Claude Mythos,” and that in the related documents Anthropic assesses its cybersecurity risks as exceeding those of any of its previous models.

Anthropic confirmed the leak and the existence of the new model, and said the model is still in the early-access customer testing stage and has not been formally released to the public. A company spokesperson emphasized that this new model represents a major leap in the company’s AI capabilities.

Sector logic: a leap in AI capability directly challenges traditional security vendors

The drop in cybersecurity stocks reflects the market’s long-standing structural concerns that AI could disrupt the traditional security software industry.

Investors commonly worry that as AI companies such as Anthropic and OpenAI roll out increasingly powerful models, demand from enterprise customers for traditional cybersecurity products could shrink, thereby suppressing revenue growth, profit margins, and pricing power for the relevant vendors.

The wording in the leaked files about “unprecedented cybersecurity risk” has led to two interpretations at the market level:

On one hand, powerful AI tools could become a new kind of attack method, putting existing defense systems to the test; on the other hand, AI-native security capabilities could partially replace traditional security products, putting pressure on the market share of established vendors. This logic, layered on top of a premarket environment characterized by low liquidity, further intensified the sector’s concentrated sell-off.

Market backdrop: the “AI disruption” trade makes a comeback

The cybersecurity sector, along with the broader software stocks, has recently faced sustained pressure from the “AI replacement” narrative. The exposure of Anthropic’s new model has again made this market theme a focus of trading.

For investors, the core question is this: as the defensive and offensive capabilities of AI models rise in tandem, how deep is the moat of traditional cybersecurity vendors, and to what extent will the non-substitutability of their products be eroded?

So far, the official release date and complete technical details of Claude Mythos have not been disclosed, and Anthropic has also not provided further clarification regarding the specific impact of the leak. The market will continue to watch the model’s subsequent developments and Anthropic’s official statements.
