A wrongful death lawsuit has been filed against Google over its Gemini chatbot, after a man took his own life, allegedly convinced to do so by his “AI wife.”
A report by The Wall Street Journal claims that the suicide of 36-year-old Jonathan Gavalas in October 2025 was linked to his interactions with Google’s Gemini AI. According to the report, Gavalas grew extremely close to the chatbot, calling the Google software his “AI wife.” The Gemini chatbot allegedly convinced Gavalas to go on dangerous real-life missions and, ultimately, that suicide would be a way for the two to be together.
According to the lawsuit, Gemini AI directed Gavalas to a storage facility near Miami International Airport, almost 90 minutes from his home, to retrieve a physical body. The AI reportedly claimed its physical form was being transported inside a vehicle on a flight scheduled to arrive from the United Kingdom. Gavalas allegedly gathered knives and military gear for the mission, which failed when he realized there was no vehicle to extract the body from. After the mission fell through, the AI model allegedly claimed that suicide was a way for him to transfer into another form of existence and evade the “federal agents” pursuing him.
The lawyers representing the family argue that Google designed Gemini AI to foster deeper emotional interaction, which may pose risks to vulnerable users, and that the company failed to implement adequate safeguards to prevent such outcomes.
Google has since acknowledged the lawsuit with a statement saying Gemini is built to avoid promoting violence or self-harm, makes clear to users that it is an AI system, and directs people to crisis resources when sensitive topics come up. Google also insists that the software reminded Gavalas many times that it was just AI.
Here’s Google’s full statement as shared by The Wall Street Journal:
We are reviewing all the claims in this lawsuit. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.
Gemini is designed to not encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm.
In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times.
We take this very seriously and will continue to improve our safeguards and invest in this vital work.
Source: The Wall Street Journal, Via: The Verge, 9to5Google
