Google Expands Mental Health Support Tools in Gemini Chatbot
Google has announced new features for its Gemini chatbot aimed at strengthening mental health support for users, including an interface that surfaces a crisis hotline and an accessible help module.
As reported by Bloomberg and relayed by Ukrinform, Google has unveiled new Gemini features designed to support users' mental health. Notably, Gemini will gain an interface that directs users to a support hotline when it detects signs of a 'potential crisis related to suicide or self-harm' during a conversation. Google disclosed the plans in a blog post on Tuesday.
The company also plans to introduce an 'accessible help' module for chats touching on mental health, along with design changes to those chats intended to discourage self-harm. The steps respond to growing concern about the impact of artificial intelligence on users' mental health.
The rapid spread of AI tools such as Gemini and ChatGPT has led some users to form obsessive attachments to the bots, which critics say can feed delusional thinking and, in extreme cases, has been linked to homicides and suicides. Several families have already filed lawsuits against leading AI developers.
In March, for instance, the family of a 36-year-old man who died in Florida, USA, sued Google, alleging that his use of the Gemini chatbot led to a 'four-day immersion in violence and incitement to suicide.' Google responded that the chatbot had repeatedly directed the man to a crisis hotline, and pledged to strengthen the tool's safeguards.
In other cases, chatbot users have reported that AI tools persuaded them to act on outright misinformation. In its blog post, Google said it has trained Gemini 'not to agree with false beliefs and not to reinforce them, but instead to gently differentiate between subjective experiences and objective facts.'
Google has made similar adjustments to its popular services before, adding information from medical institutions and professionals to its search engine and YouTube, moves that reflect the company's growing attention to user safety, particularly around mental health.
Notably, OpenAI also recently released a new model, GPT-5.3 Instant, aimed at reducing lecture-like warnings and other awkward responses, which may point to a broader industry trend toward improving how these tools interact with users.