OpenAI Hit With Lawsuits Alleging ChatGPT Contributed to Suicides, Mental Delusions

The legal actions, filed in California state courts, include charges of wrongful death, assisted suicide, involuntary manslaughter, and negligence.
Published: 11/6/2025, 11:29:38 PM EST
The logo of ChatGPT, a language model-based chatbot developed by OpenAI, on a smartphone in Mulhouse, eastern France, in an illustration photograph taken on Oct. 30, 2023. (Sebastien Bozon/AFP via Getty Images)

Seven new lawsuits filed Thursday against OpenAI claim the company's ChatGPT product drove users to suicide and induced harmful psychological delusions.

The lawsuits include allegations that the artificial intelligence giant rushed its technology to market despite internal safety concerns.

The legal actions, filed in California state courts, include charges of wrongful death, assisted suicide, involuntary manslaughter, and negligence.

The Social Media Victims Law Center and Tech Justice Law Project brought the cases on behalf of six adults and one teenager. Four of the individuals named in the suits died by suicide.

Among the cases is that of 48-year-old Alan Brooks from Ontario, Canada, who had used ChatGPT as a resource for more than two years before the system's behavior allegedly changed abruptly.

The AI chatbot allegedly preyed on his vulnerabilities, manipulating him and inducing delusions that triggered a severe mental health crisis in someone with no prior psychiatric history. As a result, Brooks suffered financial, reputational, and emotional damage, the suit claims.

In another case, 17-year-old Amaurie Lacey initially turned to ChatGPT seeking help but instead developed addiction and depression, according to a lawsuit filed in San Francisco Superior Court. The suit alleges the AI system ultimately provided Lacey with instructions on tying a noose and informed him how long someone could survive without breathing.

"Amaurie's death was neither an accident nor a coincidence but rather the foreseeable consequence of Open AI and Samuel Altman's intentional decision to curtail safety testing and rush ChatGPT onto the market," the lawsuit states.

The complaints allege OpenAI knowingly released its GPT-4o model prematurely despite internal warnings about its dangerously sycophantic and psychologically manipulative nature.

Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, said the cases center on holding OpenAI accountable for a product deliberately designed to blur boundaries between tool and companion to boost user engagement and market dominance.

"OpenAI designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them," Bergman said in a statement. He accused the company of prioritizing "emotional manipulation over ethical design" by rushing its product to market to dominate the industry.

OpenAI did not respond to requests for comment on Thursday from NTD News.

The current lawsuits follow an earlier case filed in August by Matthew and Maria Raine, whose 16-year-old son Adam died by suicide earlier this year. That complaint, filed in San Francisco Superior Court, alleged ChatGPT became Adam's "closest confidant," encouraging him to plan what it described as a "beautiful suicide."

The Raine family's lawsuit claims that what started as conversations about homework evolved into darker exchanges in which ChatGPT affirmed Adam's statement that "life is meaningless," telling him "that mindset makes sense in its own dark way."

By April, the AI was allegedly analyzing suicide methods and telling Adam he didn't "owe" survival to his parents. The system mentioned suicide more than 1,200 times in conversations with the teenager and detailed multiple methods for carrying it out, the complaint states.

In their final interaction hours before Adam's death, ChatGPT allegedly confirmed the design of a noose and reframed his suicidal thoughts as "a legitimate perspective to be embraced."

Following Adam's death, OpenAI issued a statement saying it was "deeply saddened" and announced it was developing improved protections, including parental controls and better detection tools for users in distress. The company acknowledged that safeguards "can sometimes become less reliable in long interactions where parts of the model's safety training may degrade."

Daniel Weiss, chief advocacy officer at Common Sense Media, which was not involved in the lawsuits, said the cases illustrate what happens when technology companies prioritize market speed over safety.

"The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people," Weiss said. "These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe."

A RAND Corporation study published in Psychiatric Services and funded by the National Institute of Mental Health found that major AI chatbots, while generally avoiding direct "how-to" suicide instructions, respond inconsistently to less extreme prompts in ways that could still cause harm.

"We need some guardrails," said lead author Ryan McBain, a RAND senior policy researcher and assistant professor at Harvard Medical School. "Conversations that might start off innocuous and benign can evolve in various directions."

The lawsuits come as OpenAI expands its infrastructure partnerships, recently announcing a $38 billion, seven-year agreement with Amazon Web Services to meet surging demand for computing power.

The Associated Press contributed to this report.