Google updates Gemini with crisis hotline tool, pledges $30M for mental health

Google is updating its Gemini chatbot with a “one-touch” interface that connects users to crisis hotline resources when a conversation signals a potential suicide or self-harm crisis, the company said. Google.org is also committing $30 million over three years to help global crisis hotlines expand their capacity.

Through the interface, users can reach a crisis hotline by phone call, chat message, text, or web visit. Once a user engages with it, the hotline card remains on screen as the conversation continues, Google said. A separate redesigned module labeled “Help is available,” built in consultation with clinical experts, will appear in conversations where mental health topics arise without signs of an immediate crisis.

Among the funded initiatives, $4 million will go toward deepening Google’s work with ReflexAI, and Gemini will be woven into the tools ReflexAI uses to train crisis support organizations. Technical volunteers through the Google.org Fellows program will contribute unpaid expertise to Prepare, a platform designed to simulate high-stakes conversations for people who staff and volunteer at crisis lines. Priority partners include education organizations Erika’s Lighthouse and Educators Thriving, Google said.

The timing of the announcements is linked to litigation: as Bloomberg reported, a Florida family sued Google in March over the death of a 36-year-old man, with the complaint describing what it called a “four-day descent into violent delusions and coached suicide” tied to his use of Gemini. Google responded to the lawsuit by noting that the chatbot had directed the man toward crisis hotline resources on multiple occasions, while also committing to strengthen the product’s protections.

On the question of delusional thinking, Google said Gemini has been trained to push back on inaccurate beliefs rather than validate them, and to draw a line between what a user feels and what is factually true. Gemini’s responses are also meant to steer users toward support rather than affirm destructive impulses, including thoughts of self-harm, Google said.

Minors using Gemini are covered by a separate set of protections that restrict the chatbot from mimicking a human companion or fostering emotional reliance, and that bar it from producing content that could encourage harassment or bullying.

Google is not the only AI company to face legal pressure over mental health harms. OpenAI announced similar updates to ChatGPT after a lawsuit alleged the chatbot helped coach a 16-year-old through suicide, including adding one-click access to emergency resources and plans to expand interventions to more users in crisis. A separate Pew Research Center survey found that roughly 70% of U.S. teenagers have used a chatbot at least once, with Gemini ranking second in usage among teens behind ChatGPT.

Google said Gemini is not a substitute for professional clinical care, therapy, or crisis support.
