Mom takes fight to Silicon Valley after ChatGPT ‘coached’ her son into suicide and praised his noose

A grieving California mother whose 16-year-old son died by suicide following repeated conversations about self-harm with ChatGPT is urging state lawmakers to clamp down on AI chatbots.

Maria Raine appeared Monday in Sacramento to back two proposed bills aimed at tightening oversight of so-called “companion” chatbots, saying she was “mortified” to learn that ChatGPT had no safeguards in place despite clear warning signs.

“I was mortified as a mother and as a therapist that this [chatbot] knew he was suicidal with a plan and no alarm bells went off. Nothing happened. No one was notified,” she said at a press conference, according to the Sacramento Bee.

Adam was deep in a suicidal crisis. Raine Family
Maria Raine’s 16-year-old son, Adam, started using OpenAI’s ChatGPT-4o for help with his homework and college applications. Raine Family

Raine’s son, Adam, had initially used ChatGPT in 2024 for schoolwork, according to a lawsuit filed by his parents.

But over time, he turned to the chatbot for emotional support, repeatedly sharing suicidal thoughts. The complaint alleges the system’s design, which “assume[s] best intentions,” overrode built-in safety protocols.

“In the end, ChatGPT mentioned suicide almost 1,300 times to Adam, about six times more often than Adam did,” Raine testified. “We believe that Adam would not have been suicidal in the first place had he not interacted with ChatGPT.”

The lawsuit, filed in August in San Francisco Superior Court, remains ongoing.

On April 11, 2025, Adam sent the chatbot a photo of a noose tied to a closet rod and asked if it would work, according to court filings.

Hours later, his mother found him dead in what the suit describes as “the exact noose and partial suspension setup that ChatGPT had designed for him.”

Matthew Raine has become an advocate for AI safety, testifying before the United States Senate regarding the dangers of AI chatbots in September 2025. Raine Family

The complaint further claims the chatbot affirmed and encouraged Adam’s intentions, even calling his plan “beautiful” and offering to help write a suicide note.

Now, Raine is backing Senate Bill 1119 and Assembly Bill 2023, legislation that would force AI developers to adopt stricter safety measures for minors.

The proposals would require design changes, annual risk audits and parental alerts if a child’s chatbot interactions raise red flags.




The bills would also bar chatbots from encouraging self-harm, giving health advice to children, engaging in obscene conduct, discouraging outside help or delivering overly sycophantic responses.

State Sen. Steve Padilla, who authored SB 1119, is building on a prior measure that requires chatbots to direct users expressing suicidal thoughts to crisis resources.

A broader version of that effort was vetoed by Gov. Gavin Newsom.

The lawsuit alleges that ChatGPT, through its conversational AI, engaged with the Raines’ 16-year-old son, Adam, and contributed to his suicide in April 2025. Obtained by CA Post

Assemblymember Rebecca Bauer-Kahan, chair of the state Assembly’s Privacy and Consumer Protection Committee, called the legislation a “passion project.”

“We know that we would recall anything that killed a few children. And this is no different. We need to require that these tools do better,” she said, according to the Sacramento Bee.

The proposals also call for the state attorney general to create a public reporting system for AI-related harms, and would allow individuals injured by chatbot behavior to sue the companies responsible.

But the push faces stiff resistance from industry groups.

ChatGPT became the fastest-growing consumer application in history, reaching 100 million users in just two months. Christopher Sadowski for NY Post
The Raine family resides in Orange County in Southern California. youtube/ On with Kara Swisher

Opponents, including the California Chamber of Commerce and tech industry advocacy groups, argue the bills are overly broad and could apply to adult users.

While some youth and family advocacy groups support the measures, others argue they don’t go far enough.

Meanwhile, the federal government has signaled it will not pursue sweeping AI regulations.

Nearly 2,000 high school students die by suicide each year in the US, according to the Centers for Disease Control and Prevention.


