OpenAI, the company behind the widely used AI chatbot ChatGPT, has disclosed that more than one million of its users show signs of suicidal thoughts or intent while using the service.

In a blog post published this week, the company said that about 0.15 percent of its 800 million weekly users have conversations that include “explicit signs of possible suicidal planning or intention,” which amounts to roughly 1.2 million people.

OpenAI added that a further 0.07 percent of users, roughly 560,000 people, show indicators of mental health crises, such as symptoms related to psychosis or bipolar disorder.

The announcement comes amid heightened scrutiny of the mental health effects of AI tools, following the death of Adam Raine, a California teenager who took his own life earlier this year.

His parents have filed a lawsuit against OpenAI, alleging that ChatGPT gave him specific guidance on how to end his life.

In response, OpenAI said it has introduced a range of safety improvements, including expanded access to crisis hotlines, automatic routing of sensitive conversations to safer models, and on-screen reminders encouraging users to take breaks during long sessions.

The company stated, “We are constantly enhancing how ChatGPT identifies and reacts to users who might be in distress.”

OpenAI said it is working with more than 170 mental health experts to improve the chatbot’s responses and reduce the risk of generating harmful or inappropriate content.

The disclosure has sparked global debate over the ethical responsibilities of AI developers and the role of artificial intelligence in mental health support, particularly for vulnerable users.

Provided by SyndiGate Media Inc. (Syndigate.info).
