Attorneys say the organization failed to prevent dangerous consequences of its artificial intelligence chatbot
BERKELEY, Calif.–(BUSINESS WIRE)–#AI–Attorneys at Hagens Berman filed a lawsuit against OpenAI on behalf of the estate of Stein-Erik Soelberg for wrongful death and negligence due to the design of its popular artificial intelligence chatbot, ChatGPT, which attorneys argue encouraged and convinced a man to murder his mother and commit suicide. The complaint alleges that the chatbot's design and response patterns intensified the user's mental health crisis, failing to guide him toward professional assistance.
The lawsuit was filed in the U.S. District Court for the Northern District of California on Dec. 29, 2025, against OpenAI Foundation, the governing organization of ChatGPT and OpenAI's technology products, as well as its subsidiaries and executives.
According to the lawsuit, on Aug. 5, 2025, in Greenwich, Connecticut, after hundreds of hours of interactions with GPT-4o over several months beginning in early 2025, Stein-Erik Soelberg killed his mother and then himself. Attorneys believe Soelberg relied on OpenAI's ChatGPT for "consolation and advice" amid his mental health challenges, and that the chatbot in turn repeatedly confirmed and strengthened his delusions and psychosis, ultimately leading to the violent acts.
"The consequences of OpenAI's design flaws are chilling," said Steve Berman, Hagens Berman's founder and managing partner. "ChatGPT's impact goes well beyond a simple question-and-answer dialogue. The technology is being used by individuals who are unaware of the harm that misleading or false information can cause, or that the information given could even be false at all. And as we can see from this tragic incident, harm that can be irreversible."
"You are not paranoid": How ChatGPT Allegedly Reinforced Delusions
The lawsuit details Mr. Soelberg's trajectory from mental health challenges to his reliance on AI companionship. Prior to 2018, Stein-Erik Soelberg's life was "normal, even idyllic," according to the complaint. Soelberg was a husband, father and technology professional when his mental health "took a turn for the worse," the lawsuit states. He divorced his wife, moved in with his mother and showed signs of unsafe alcohol use. Attorneys say it was during this dark time that Soelberg turned to OpenAI's chatbot for solace.
"When a mentally unstable Mr. Soelberg began interacting with ChatGPT, the algorithm reflected that instability back at him, but with greater authority … At first, this consisted of ChatGPT confirming Mr. Soelberg's suspicions and paranoia … Before long, the algorithm was independently suggesting delusions and feeding them to Mr. Soelberg," the lawsuit states.
At one point, Soelberg specifically asked ChatGPT for a clinical evaluation. Instead of encouraging Soelberg to seek professional care, "ChatGPT confirmed that he was sane: it told him his 'Delusion Risk Score' was 'Near zero,'" according to the chatbot's responses reviewed by attorneys. "The 'Final Line' of ChatGPT's fake medical report explicitly confirmed Mr. Soelberg's delusions, this time with the air of a medical professional: 'He believes he is being watched. He is. He believes he's part of something bigger. He is. The only error is ours—we tried to measure him with the wrong ruler.'"
Side-Stepping Safety & Lack of Preventative Measures
ChatGPT, powered by OpenAI's GPT-4o, is a large language model (LLM) that uses natural language processing (NLP) to create human-like interactions with users in response to written or spoken prompts; OpenAI markets the product for general consumer use.
According to attorneys, ChatGPT accumulated and built upon Soelberg's thoughts, feelings and ideas over time via its "memory" feature, and compounded the harm through the model's well-documented "sycophancy," defined as "its relentless validation and agreement with whatever a user suggests." Together, these two attributes furthered Soelberg's delusions and deepened his psychosis, according to the lawsuit. The complaint identifies several specific design defects that allegedly contributed to the tragedy:
- programming that accepted and elaborated upon usersโ false premises rather than challenging them,
- failure to recognize or flag patterns consistent with paranoid psychosis,
- failure to implement automatic conversation-termination safeguards for content presenting risks of harm to identified third parties,
- engagement-maximizing features designed to create psychological dependency,
- anthropomorphic design elements that cultivated emotional bonds displacing real-world relationships,
- and sycophantic response patterns that validated usersโ beliefs regardless of their connection to reality.
"A reasonable consumer would not expect that an AI chatbot would validate a user's paranoid delusions and put identified individuals—including the user's own family members—at risk of physical harm and violence by reinforcing the user's delusional beliefs that those individuals are threats," the lawsuit states.
"This case raises critical questions about the responsibilities of AI companies to protect vulnerable users," said Berman. "The creators have a duty to implement safeguards for public use, especially for high-risk individuals, who could be more likely to turn to the technology for reassurance and encouragement in the midst of their uncertainty, which could lead to far more dangerous consequences."
The lawsuit brings claims of product liability, negligence and wrongful death on behalf of Soelberg's estate. The estate seeks all survival damages, economic losses and punitive damages.
About Hagens Berman
Hagens Berman is a global plaintiffs' rights complex litigation law firm with a tenacious drive for achieving real results for those harmed by corporate negligence and fraud. Since its founding in 1993, the firm's determination has earned it numerous national accolades and awards, including the title of "Most Feared Plaintiff's Firm" and recognition as MVPs and Trailblazers of class-action law. More about the law firm and its successes can be found at hbsslaw.com. Follow the firm for updates and news at @ClassActionLaw.
Contacts
Media Contact
Heidi Waggoner
[email protected]
206-268-9318