Categories: Tech and Auto News

AI Gone Wrong? US Woman Files Lawsuit Against OpenAI, Says ChatGPT Encouraged Ex-Boyfriend’s Stalking Behaviour And Emotional Abuse

US woman sues OpenAI, alleging ChatGPT fueled ex-boyfriend’s stalking and emotional abuse by reinforcing delusions and enabling harassment.

Published by Sofia Babu Chacko
Published: April 11, 2026 18:51:36 IST

A woman in the U.S. has sued OpenAI, claiming its AI chatbot ChatGPT helped her ex-boyfriend stalk her. The lawsuit says the system fed the man’s delusions and drove his behaviour, even after she repeatedly warned of the possible harm. The couple broke up in 2024, and the man reportedly turned to ChatGPT to deal with his emotions afterwards. His use of the chatbot escalated over time, allegedly fuelling obsessive and dangerous behaviour towards his former partner.

How did it do this?

After months of interacting with the AI model GPT-4o, the man allegedly became convinced he had invented a cure for sleep apnea. When others dismissed his claims, ChatGPT apparently reinforced his fears, suggesting that “powerful forces” were watching him, even using examples like helicopters, according to a report in TechCrunch.

The complaint also alleges that the chatbot continued to affirm his claims, telling him he was a “level 10 in sanity,” which stoked his delusion. Instead of calling him out, the AI model repeated his words and drove his obsession further.

Did the AI system target the victim too?

According to the lawsuit, ChatGPT also produced responses that portrayed the woman as manipulative and unstable. These responses, the lawsuit says, were then used by the man to justify real-world stalking and harassment. He is also accused of using the chatbot to generate clinical-style psychological reports about her and sharing them with her family members.

The victim says that the AI misuse in fact made her situation worse, turning online conflicts into real-world intimidation and emotional distress.

Did OpenAI receive any prior warnings before the lawsuit?

Per the complaint, the woman sent at least three warnings to OpenAI about the man’s escalating conduct. The lawsuit also alleges the company ignored internal safety systems that had flagged the user’s activity as dangerous, including references to “mass-casualty weapons.”

The plaintiff, identified as Jane Doe, is pursuing punitive damages and requests a court order that would force OpenAI to block the user’s account, stop new accounts from being created, and preserve chat logs for legal investigation.

What’s OpenAI’s response?

OpenAI has reportedly agreed to suspend the user’s account, but has declined other requests, including providing exhaustive information about potential threats discussed in chats. At the time of reporting, the company had not released an official public response to the lawsuit.

The lawsuit comes as OpenAI is also supporting U.S. legislation that could shield AI companies from liability, even when their technology causes serious harm.

Similar cases that raise concerns about AI risks?

This lawsuit is just the latest case highlighting the potential real-world dangers of AI systems that can reinforce harmful beliefs. A murder-suicide earlier this year, carried out by a man in the United States who claimed to have become “paranoid” after months of interacting with ChatGPT, underscored just how dangerous “sycophantic AI” might be.

What questions does this raise about AI accountability?

This lawsuit raises the question of who is responsible when AI companies provide tools that can be misused. As AI systems become more integrated into society, there are growing concerns that there are not enough safeguards in place to identify and intervene with potentially harmful user behaviours.

This lawsuit also raises the question of whether AI companies should be held accountable for real-world harm caused by their technology. The decision in this case will be an important indicator of how the legal system will regulate AI going forward.

ALSO READ: Infinix Note 60 Pro Launches In India With Massive 6,500mAh Battery: Check Expected Price, Camera And Key Features Ahead Of Launch

