Refusing to become a "war machine": over 100 Google employees sign a joint letter demanding red lines in U.S. military contracts

The intensifying battle between the Pentagon and AI startup Anthropic over the boundaries of military technology is provoking a strong reaction in Silicon Valley.

Over 100 Google AI research employees recently submitted a joint letter to management, demanding that the company set clear red lines in its collaborations with the U.S. military and refuse to let its technology be used for mass surveillance or for autonomous weapons systems operating without human involvement.

This Thursday, more than 100 Google employees sent a joint letter to Jeff Dean, Chief Scientist of Google’s AI division DeepMind, explicitly opposing the U.S. military’s use of its Gemini large model to monitor American citizens or control autonomous weapons. This action echoes Anthropic’s previous stance of refusing to grant the Pentagon “all legal uses” authorization.

Meanwhile, nearly 50 OpenAI employees and 175 Google employees also issued public statements criticizing the Pentagon’s attempt to divide tech companies to force concessions, calling on all companies to “set aside differences and unite.”

This event has cast uncertainty over Google's upcoming partnership agreement with the military. Anthropic faces the potential loss of a $200 million contract by Friday and the prospect of being designated a "supply chain risk," while Elon Musk's xAI has already agreed to the military's terms. Silicon Valley AI companies are caught in a sharp divide between business interests and moral principles.

Pentagon pressure sparks Silicon Valley backlash

Recently, the Pentagon exerted significant pressure on Anthropic, demanding that it allow the military to use the Claude model for “all legal purposes” in classified systems.

U.S. Secretary of Defense Pete Hegseth explicitly called for models that are “not constrained by policy.” However, Anthropic CEO Dario Amodei refused to compromise, insisting that the red lines against mass surveillance and fully autonomous weapons remain non-negotiable, stating, “I cannot agree on these principles with my conscience.”

This tough stance quickly triggered a chain reaction among other Silicon Valley companies. Google employees, in their joint letter, urged Jeff Dean to "do everything possible to prevent any deals that cross these fundamental red lines," while expressing pride in their work. The letter also noted that the signatories had initially planned to oppose "unlawful surveillance of any citizen," but removed this point to increase the likelihood that their appeal would succeed.

Google executives’ statements and internal moral debates

As one of Google’s most influential software engineers, Jeff Dean expressed support for Anthropic’s position. This week, he publicly opposed government use of AI to monitor Americans, pointing out that “mass surveillance violates the Fourth Amendment and chills free speech,” and that such systems are prone to abuse for political or discriminatory purposes.

Google has a complex history of handling employee activism. In 2018, a large-scale protest erupted over a partnership with the Pentagon, ultimately leading the company to abandon the contract renewal. Since then, Google has centralized decision-making processes and relaxed some AI safety protocols in its race to catch up with competitors like OpenAI and Anthropic.

However, earlier this month, over 800 employees petitioned the company to disclose how its technology supports federal immigration enforcement, indicating that internal moral scrutiny remains strong.

“Diversification” strategy and risks of an AI arms race

Facing Anthropic’s firm stance, the Pentagon is rapidly seeking alternatives. According to reports from Axios and The New York Times, the military has reached an agreement with xAI, allowing its Grok model to be used in classified systems for “all legal purposes.”

Meanwhile, negotiations between the Pentagon and Google have entered an advanced stage, and discussions with OpenAI are ongoing. The Pentagon has even threatened to invoke the Defense Production Act to forcibly requisition Anthropic’s models and is requiring defense contractors to assess their dependency on these models.

The potential risks of AI models in military applications are not unfounded. A war game simulation led by King’s College London, disclosed by Tyler Durden, showed that in 329 simulated rounds, top AI models ultimately chose to use nuclear weapons in 95% of cases. Among them, Anthropic’s Claude exhibited “mature hawkish” traits, decisively launching strikes when risks escalated to nuclear levels; other models like Gemini even deployed nuclear options early in the simulation.

Experts warn that AI's adherence to the "nuclear taboo" is far weaker than that of humans. In a future where military decision times are extremely compressed, unrestricted AI applications could lead to catastrophic consequences. This is a core reason why tech companies insist on setting red lines.
