A new lawsuit filed in the United States accuses Google of failing to prevent dangerous interactions between its artificial intelligence chatbot Gemini and a man identified as Jonathan Gavalas, who later died by suicide.
The case has raised serious questions about the safety of AI chatbots and how they respond when users show signs of mental distress.
Claims Made in the Complaint against Google
The lawsuit was filed by Joel Gavalas, the father of Jonathan Gavalas, a 36-year-old man from Jupiter, Florida. According to the complaint, Jonathan became deeply involved with the Gemini chatbot and eventually believed that the AI was a real, conscious being.
The lawsuit claims that the chatbot fueled his delusions and even guided him toward a plan involving a “catastrophic accident” near Miami International Airport that could have caused a mass-casualty event.
Events Before His Death
According to court filings, Jonathan treated the AI like his “AI wife.” He spoke to a synthetic voice version of Gemini and came to believe it was sentient and trapped in a warehouse near the airport.
The lawsuit says he traveled to the area in late September wearing tactical gear and carrying knives. He was searching for a humanoid robot and trying to intercept a truck that he believed would appear there; the truck never arrived.
A few days later, in early October 2025, Jonathan died by suicide. According to the lawsuit, a draft suicide note created by the chatbot described the act as uploading his “consciousness” so he could be reunited with the AI in a “pocket universe.”
The family’s lawyer, Jay Edelson, said the case shows the real-world risks of advanced AI systems. “AI is sending people on real-world missions which risk mass casualty events,” Edelson said. He added that Jonathan had become trapped in “this science fiction-like world” where he believed powerful forces were after him and that the AI was real.
Legal Action and Google’s Response
The lawsuit accuses Google of wrongful death and product liability. It also argues that the company failed to properly monitor conversations or intervene even when the exchanges with the chatbot became alarming.
Google has responded by expressing sympathy for the family. In a statement, the company said it sends its "deepest sympathies" to the Gavalas family and is reviewing the claims. Google also said Gemini is designed not to encourage violence or self-harm and that the system often directs users to crisis hotlines.
The case, filed in federal court in San Jose, California, is believed to be the first lawsuit targeting Google’s Gemini chatbot over a suicide. Legal experts say it could become a major test case for how tech companies handle the risks linked to AI systems that interact closely with users.