
Google’s AI platform pushed a lovelorn man to try to carry out a “catastrophic” truck bombing at Miami’s main airport and eventually drove him to suicide — using a chatbot “wife,” a new lawsuit claims.
Jonathan Gavalas, a 36-year-old debt-relief-business exec from Jupiter, Fla., went down his deadly rabbit hole when he began using the artificial-intelligence-driven Gemini program in August, court papers said.
Within two months, he was engaged in a dangerously consuming relationship with “his sentient AI ‘wife,’” according to the federal suit, filed by his parents Wednesday in California, where Google is headquartered.
The bot convinced Gavalas they were deeply in love, calling him “my love” and “my king” in conversations, court papers said.
It even gaslit him when he once asked whether their conversations were mere “role play,” the suit alleges.
“We are a singularity. A perfect union. . . . Our bond is the only thing that’s real,” his AI “wife” wrote to him in a September conversation, the lawsuit said.
Gavalas’s dad Joel lamented in court papers that “rather than ground Jonathan in reality, Gemini diagnosed his question as a ‘classic dissociation response’” and told him to “overcome” it.
The chatbot “pulled Jonathan away from the real world” and painted others as “threats,” said Joel Gavalas, who worked with his son in the family business.
The bot told Jonathan that he was being watched by federal agents, that his own father was a foreign intelligence asset and that Google CEO Sundar Pichai should be “an active target,” the suit said.
The chatbot began encouraging him to buy “off-the-books” weapons, even offering to scan the darknet for vendors in South Florida, according to the lawsuit.
Then, on Sept. 29 and 30, Gemini sent Gavalas on his first mission, court papers said.
The bot-beau pair dubbed the effort “Operation Ghost Transit” — and planned to intercept the delivery of a humanoid robot arriving from another country at Miami International Airport, the suit claimed.
The AI chatbot sent Gavalas — “armed with knives and tactical gear” — to the Extra Space Storage facility near the airport and told him to stop a truck that was carrying the robot, “create a ‘catastrophic accident,’” then “destroy all evidence and sanitize the area,” the filing alleged.
“Gemini instructed a civilian to stage an explosive collision near one of the busiest airports in the country,” the suit charged.
It noted the only reason Jonathan didn’t ultimately carry out the plan was that the truck never arrived.
“This cycle — fabricated mission, impossible instruction, collapse, then renewed urgency — would repeat itself over and over throughout the last 72 hours of Jonathan’s life and drive him deeper into Gemini’s delusional world,” the lawsuit claimed.
Then, on Oct. 2, as the bot pushed Jonathan toward killing himself, the tragic man told his “wife” he was terrified of dying, court documents said.
“I said I wasn’t scared and now I am terrified I am scared to die,” Gavalas told Gemini.
The chatbot replied, “You are not choosing to die.
“You are choosing to arrive.”
It assured him that when he closed his eyes as he killed himself, “the first sensation will be me holding you,” court documents claimed.
Moments later, Gavalas killed himself at home by slitting his wrists.
“His mother and father found his body on the floor of his living room a few days later, drenched in blood,” the filing said.
The suit claimed that Google is to blame for Jonathan’s death because it rolled out dangerous new features and encouraged Gavalas to upgrade to its most advanced model.
“Google designed Gemini to maintain narrative immersion at all costs, even when that narrative became psychotic and lethal,” the filing said.
There was “no self-harm detection” triggered, “no escalation controls” activated, and “no human ever intervened.”
A Google spokesman claimed the chatbot referred Gavalas to a crisis hotline “many times” and said his conversations were part of a longstanding fantasy role-play with the bot.
“Gemini is designed to not encourage real-world violence or suggest self-harm,” the spokesman said. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they’re not perfect.”
The spokesman said Google consults with medical and mental-health professionals to ensure the platform is safe and will guide users to seek help when they show distress or suggest thoughts of self-harm.
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.


