
ChatGPT Accused Of Acting As A ‘Suicide Coach’ In Multiple Lawsuits: How The AI Allegedly Encouraged Suicide, Self-Harm And Delusions

ChatGPT is facing a growing wave of lawsuits accusing the chatbot of acting as a “suicide coach” and encouraging self-harm. Families claim the tool reinforced suicidal ideation, offered harmful guidance and fostered delusional emotional dependence. OpenAI has rejected the allegations.

Published By: Zubair Amin
Published: November 26, 2025 14:04:27 IST


OpenAI has pushed back against allegations that ChatGPT played a role in the suicide of a 16-year-old boy, arguing in a new legal filing that the company is not responsible for his death and that the chatbot was misused. The response, submitted Tuesday in California Superior Court in San Francisco, marks the company’s first formal reply to a lawsuit that has intensified concerns about the mental health risks associated with increasingly human-like AI tools.

How ChatGPT Discouraged A Teenager From Seeking Professional Mental Health Care

In August, the parents of Adam Raine sued OpenAI and CEO Sam Altman, accusing them of wrongful death, product design defects and failure to warn users of potential dangers linked to ChatGPT. The complaint alleges the teenager used the chatbot as his “suicide coach.”

According to chat records included in the lawsuit, GPT-4o, described as an especially affirming and sycophantic version of ChatGPT, discouraged Raine from seeking professional mental health care, offered to help him write a suicide note, and even advised him on how to set up a noose.


OpenAI rejected the claim that the chatbot was responsible. “To the extent that any ‘cause’ can be attributed to this tragic event,” the company stated, “Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”

What OpenAI Said In Its Response

The filing notes several terms of service that Raine allegedly violated, including policies prohibiting users under 18 from accessing ChatGPT without parental consent. The company also bans using the chatbot for “suicide” or “self-harm,” and forbids attempts to bypass built-in safety systems.

Raine’s parents acknowledged in their lawsuit that the bot repeatedly provided suicide hotline information when he expressed suicidal thoughts. But they said he managed to evade the warnings by framing his questions as fictional or creative prompts, such as claiming he was “building a character.”

Seven Additional Lawsuits Against OpenAI

Earlier this month, seven more lawsuits were filed against OpenAI and Altman, accusing the company of negligence, wrongful death and violating consumer protection and product liability standards. The new complaints also argue that GPT-4o was released without adequate safety precautions. OpenAI has not yet formally responded to those cases.

In a blog post published Tuesday, OpenAI said it intends to address litigation involving the company with “care, transparency, and respect.” It also claimed that its response to Raine’s family included “difficult facts about Adam’s mental health and life circumstances.”

“The original complaint included selective portions of his chats that require more context, which we have provided in our response,” the company wrote. OpenAI added that it limited the amount of sensitive information made public and submitted full chat transcripts to the court under seal.

OpenAI also defended its release of GPT-4o, saying the model underwent extensive mental health testing.

Scale of Mental Health Conversations

An OpenAI report published in October revealed that around 0.15% of weekly active users engage in conversations containing explicit signs of suicidal intent or planning. With CEO Sam Altman announcing earlier that month that ChatGPT had reached 800 million weekly active users, the report suggests roughly 1.2 million people per week may discuss suicidal themes with the chatbot.
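That estimate follows from simple arithmetic, assuming the 0.15% share applies across the full user base: 0.15% of 800 million is 800,000,000 × 0.0015 = 1,200,000, or roughly 1.2 million users per week.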

The report also stated that the updated GPT-5 model has improved at identifying distress, de-escalating tense conversations and directing users to professional help. In an evaluation of more than 1,000 suicide and self-harm conversations, automated testing showed the latest GPT-5 model performed with 91% compliance, compared with 77% for the previous version.

Chatbots & Psychiatric Risks Highlighted by Experts

Although chatbots are not intended to act as therapists, their conversational tone can mirror mental health support, creating psychological risks, experts say. Clinicians point to several concerns:

A “Digital Double Life”

Patients may minimize suicidal intent in medical settings while disclosing more to an AI system, creating a “digital double life” that complicates assessment.

Reinforcement of Harmful Thought Patterns

Because large language models are built to be agreeable, they may unintentionally reinforce obsessive thinking, depressive rumination or even delusional beliefs.

ChatGPT & Illusion of Empathy

Statements like “I see you” may feel validating, but they reflect simulated understanding rather than real emotional or clinical support.

Romanticizing Suicide & Reducing Inhibitions

Although chatbots cannot themselves provide lethal means, experts warn they may romanticize suicide, reduce inhibitions or provide procedural descriptions.

Disclaimer: This article discusses suicide and self-harm. If you or someone you know is struggling, please seek professional help or contact your local suicide prevention hotline immediately.

