Google Gemini update adds mental health support and crisis response features
Google has announced an update to its AI assistant Gemini focused on mental health support, crisis response, and user safety. Mental health conditions affect over one billion people globally, and Google states that its work in this area is grounded in research and clinical best practices. While AI tools can introduce new challenges, Google argues that responsibly designed AI can support user well-being as adoption grows.

Alongside these updates, Google.org is providing $30 million over the next three years to support crisis helplines worldwide.

Gemini mental health support updates

Gemini has been updated to identify conversations that may indicate mental health concerns. When such signals are detected, it surfaces dedicated interfaces, developed with clinical experts, that guide users toward relevant support resources.

Redesigned “Help is available” module

A clinically informed help interface appears when conversations suggest mental health-related concerns, providing access to support resources.

One-touch crisis hotline interface

When conversations indicate potential suicide or self-harm, users are shown a simplified interface to connect with crisis support via:

  • Chat
  • Phone calls
  • Text messages
  • Crisis hotline websites

Persistent access to support options

Once activated, the crisis support interface remains available throughout the conversation.

Encouragement to seek professional help

Responses are structured to direct users toward real-world support services and to encourage help-seeking.

Response behavior in sensitive situations

Gemini is trained to:

  • Encourage help-seeking
  • Avoid validating harmful behaviors
  • Distinguish subjective experiences from objective facts
  • Identify conversations that may indicate acute mental health situations and respond by directing users to professional support

Partnership with ReflexAI for training support

Google is expanding its collaboration with ReflexAI. This includes:

  • $4 million in direct funding
  • Integration of Gemini into ReflexAI’s training suite
  • AI-powered simulations for training staff and volunteers
  • Development of the “Prepare” platform for critical conversation training
  • Pro bono technical support from Google.org Fellows

Priority partners include Erika’s Lighthouse and Educators Thriving.

Protections for younger users

Safeguards include:

  • Preventing Gemini from acting like a human companion
  • Restricting claims of human identity or attributes
  • Avoiding emotionally dependent interactions
  • Preventing bullying or harassment-related content

Safety approach

Gemini is not a substitute for professional clinical care, therapy, or crisis intervention. It is designed to recognize potential mental health signals and guide users toward real-world assistance.

The system focuses on:

  • Directing users to external support resources
  • Encouraging help-seeking behavior while avoiding reinforcement of harmful actions
  • Avoiding confirmation of false beliefs
  • Identifying acute mental health situations and responding with appropriate guidance

These safeguards are maintained by Google’s clinical, engineering, and safety teams.

Outlook

The update strengthens Gemini’s ability to support users in sensitive situations by improving detection of mental health signals and providing direct access to support resources. It also combines clinical guidance, safety systems, global funding, and partnerships such as ReflexAI to improve access to crisis support and training infrastructure.