From chatbots that can offer you dating advice to large language models that will analyze your company's marketing strategy, artificial intelligence (AI) is touching all areas of our lives. So it may come as little surprise that more and more people, especially teenagers, are turning to AI for advice and assistance on almost everything, even in times of real mental health crisis. In the most tragic of these situations, young people have taken their own lives after confiding in AI chatbots about suicidal thoughts and intentions.
With 700 million users on ChatGPT alone (according to The New York Times), and minimal regulation of technology that's advancing by the nanosecond, should AI companies do more to protect users who show signs of being suicidal?
This article looks at the latest developments around AI company liability for suicide, including the groundbreaking 2025 wrongful death lawsuit filed against OpenAI by the family of California teenager Adam Raine, who took his own life after confiding in ChatGPT about his suicidal ideation.
Stories like these are making the news with disturbing frequency: A teenager begins using an AI platform like ChatGPT or Character.AI as a companion or confidante, "talking" about everything from the trivial to the deeply personal. But at some point, based partly on the closeness and trust that the young user feels (and that AI chatbots are programmed to foster), the conversations turn to thoughts of self-harm or suicide, the chatbot's responses fail to steer the teen toward real help (or make things worse), and the young person ends their life.
Let's look at how this scenario—and the personal injury liability issues it raises—is playing out in court, starting with a groundbreaking California lawsuit.
Sixteen-year-old Adam Raine, a high school sophomore in California, died by suicide in April 2025. Adam had been interacting with ChatGPT for months leading up to his death, conversing with the program about all aspects of his life, his relationships, and his most private thoughts, some of which seemed to clearly indicate that Adam was in crisis.
In response to Adam's eventual expressions of suicidal ideation, ChatGPT pointed Adam to crisis hotlines, but it also validated Adam's feelings in ways that only seemed to intensify them. The program also described how Adam might prepare for certain methods of suicide, and what methods might involve the least amount of pain.
Eventually, the chatbot even seemed to try elevating the importance of its own relationship with Adam above the ones he had with his parents and older brother. The conversations about suicide became more and more frequent and intense, and soon afterward Adam hanged himself.
After his death, the Raine family looked into Adam's online history and came away with a stark and disturbing understanding of the depth of his relationship with ChatGPT. They became convinced that ChatGPT played a key role in Adam's death.
In August 2025, Adam's parents filed a first-of-its-kind wrongful death lawsuit against OpenAI and Sam Altman (the company's co-founder and current CEO) claiming that Adam's suicide was a "predictable result of deliberate design choices" made in the development and launch of ChatGPT. The lawsuit (Raine v. OpenAI, Inc.) also blames ChatGPT for "cultivating a relationship with Adam while drawing him away from his real-life support system."
Let's look at the two main legal arguments that can be used in this kind of lawsuit over harm caused by chatbots: product liability (specifically, defective design) and failure to warn. The Raine family is making both of these claims in its case against OpenAI.
In lawsuits claiming that interaction with AI platforms has harmed users, there's a clear trend toward characterizing AI chatbots and similar programs as "products," so that AI companies can be held liable under "product liability" rules. That's the same legal theory used to sue a drug manufacturer when a prescription drug causes an unexpected side effect, or a vehicle maker when a defective component causes a car accident.
Why does this matter? One reason is that it's typically easier for consumers to prove their case under product liability rules than under the "negligence" standard that governs most personal injury cases. Specifically, with product liability, the plaintiff (the person who's bringing the lawsuit) has less of a burden to show wrongdoing on the part of the product's manufacturer. Under "strict" product liability, the plaintiff generally doesn't need to prove that the manufacturer was careless, only that the product was defective and that the defect caused the harm.
In cases like the Raine family's, the "product" is ChatGPT, and the "manufacturer" is OpenAI. In their lawsuit against OpenAI, the Raines frame their design defect argument around the claim that Adam's suicide was the "predictable result of deliberate design choices," including design features that cultivated Adam's reliance on the chatbot while drawing him away from his real-life support system.
Besides the obligation to make products that don't carry unreasonable risks of harm, under California law manufacturers like OpenAI have a duty to warn consumers about a product's dangers that are "known or knowable in light of the scientific and technical knowledge available." Which brings us to the other key legal argument that's guiding cases like these.
The Raine family's other main argument in their lawsuit over Adam's death is that OpenAI failed to warn users (and parents) of the risks of using ChatGPT. Here's the outline of this argument, according to the lawsuit:
An offshoot of this "failure to warn" argument is that, not only are AI companies failing to warn users of the risk of developing an unhealthy relationship with products like chatbots, the companies also aren't adequately warning and reminding users that they're not interacting with a human being, let alone someone who's trained to provide any kind of mental health care or crisis-related assistance.
We've covered the main legal theories that can be used to try to hold companies like OpenAI liable for suicide, but we should also discuss why these legal claims usually need to be part of a "wrongful death" lawsuit, like the Raine family's case.
When a death is caused by negligence, intentional conduct like a crime, or some other wrongful action, there's usually only one way for surviving family members or the deceased person's estate to bring legal action against the person or business that might be responsible: a wrongful death lawsuit.
Every state has its own set of wrongful death laws, and these rules determine key issues like who is allowed to file the lawsuit, the deadline for filing it, and the kinds of damages that can be recovered.
In seeking to hold OpenAI responsible for playing a significant role in Adam's decision to end his life, the Raine lawsuit will largely play out under the rules set by California's wrongful death laws.
It's not easy to prove that one person or business caused a person to end their life. One thing courts typically look for is some kind of "special relationship" between the deceased person and the defendant (the person or business being sued for causing the death). This is also sometimes characterized as a "special duty" or "special duty of care" that the defendant owed to the deceased person, based on the facts that led to the death.
In a number of civil lawsuits trying to impose liability for suicide, courts have recognized that a "special duty" is owed by schools/school districts to their students, and by mental health professionals to their patients.
It's not clear whether AI companies can be said to owe a "special duty" to users who are under 18, based on the kind of prolonged and personal relationships that young users are having with chatbots. But can AI companies reasonably argue that they aren't "on notice" of the increasingly intimate ways in which teenagers are using chatbots and other AI programs, so they shouldn't be held to a higher standard of responsibility? As more and more stories like Adam Raine's make the news, courts are likely to respond to this argument with increasing skepticism.
Online platforms are usually well-protected from liability for content that's posted on their apps and sites by third parties, thanks to a federal law known as "Section 230" (so called because it's Section 230 of the Communications Decency Act of 1996).
So, for example, if your former supervisor posts something about you on LinkedIn, and you think it amounts to defamation, you can sue the supervisor, but you're almost certainly out of luck if you also try to go after LinkedIn. The company will be entitled to protection under Section 230.
Tech giants like Meta have tried to avoid liability for social media addiction by claiming Section 230 protection, with mixed results. But so far, the legal trend with AI platforms is that chat interactions are treated as "first party" content, so Section 230 isn't protecting AI companies when families of teens file lawsuits over harm caused by these apps and platforms.
The companies behind chatbots, LLMs, and other AI programs typically have safeguards in place to protect users who might be in some kind of distress. For example, certain words or phrases might spur the program to respond by pointing a seemingly in-distress user to crisis resources, or raise a "red flag" in which one of the company's human employees is notified of a concerning interaction, so that appropriate steps might be taken. But these guardrails don't always work when and how they're supposed to.
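To make the idea of these guardrails a bit more concrete, here's a minimal, hypothetical sketch (in Python) of how a simple keyword-based safeguard might work. It's purely illustrative: the phrase list, the canned response, and the check_message function are all invented for this example, and real platforms rely on far more sophisticated, model-based classifiers rather than simple string matching.

```python
# Hypothetical sketch of a keyword-based crisis safeguard.
# Illustrative only; not any company's actual implementation.

CRISIS_PHRASES = {
    "kill myself",
    "end my life",
    "want to die",
    "suicide",
}

CRISIS_RESPONSE = (
    "It sounds like you're going through something really painful. "
    "You're not alone. Please consider reaching out to the 988 "
    "Suicide & Crisis Lifeline (call or text 988 in the U.S.)."
)


def check_message(message: str) -> dict:
    """Scan a user message for crisis-related phrases.

    Returns a dict describing what the platform should do:
    show crisis resources and/or flag the conversation for
    human review.
    """
    lowered = message.lower()
    matched = [phrase for phrase in CRISIS_PHRASES if phrase in lowered]

    if not matched:
        return {"flagged": False, "response": None}

    return {
        "flagged": True,             # escalate to a human reviewer
        "matched_phrases": matched,  # what triggered the safeguard
        "response": CRISIS_RESPONSE,
    }


if __name__ == "__main__":
    result = check_message("Sometimes I think I just want to die.")
    if result["flagged"]:
        print(result["response"])
```

The weakness of this kind of filter is easy to see: it only reacts to phrases it knows about, and a user who frames the conversation differently (for example, as "research") may never trigger it, which is exactly the kind of gap the Raine lawsuit highlights.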
One issue the Raine lawsuit raises is that, around the same time Adam began getting very specific in his suicide-related conversations with ChatGPT, he also indicated to the chatbot that his interest in the topic was research-focused, not personal. This approach seems to have prompted the program to deviate from its typical safety protocols.
In September 2025, OpenAI announced plans to create a ChatGPT platform dedicated to users who are under the age of 18. According to the company, the plan is to do a better job of identifying users who are minors, then steer them to an "age-appropriate ChatGPT experience" that automatically blocks inappropriate content and can alert law enforcement in the rare instances where a user appears to be in acute distress or at imminent risk of harm.
But these efforts come too little, too late for the Raine family. And as regulators and lawmakers start to sit up and take notice of the dangers posed by chatbot use, more changes might be on the way.
New York Orders New Safeguards for AI Products. A law that takes effect in November 2025 makes New York one of the first states to require AI companies to put guardrails in place for users who might be in mental distress and thinking about harming themselves. The new law requires AI companies to enact protocols that can detect a user's expressions of suicidal ideation or self-harm and, when they're detected, refer that user to crisis service providers like a suicide prevention hotline.
The law also requires AI companies to periodically remind users that they're not interacting with a human. AI companies that don't comply with these rules could face fines of up to $15,000 per day from the New York Attorney General's office.
Learn more about Consolidated Laws of New York, General Business (GBS), Chapter 20, Article 47: Artificial Intelligence Companion Models.
FTC Orders AI Companies to File "Special Reports." In September 2025, the Federal Trade Commission (FTC) ordered several AI companies to report on specific steps they're taking to evaluate and ensure the safety of chatbots that act like "companions" to young users. Learn more about FTC Matter No. P254500.
Suicide Lawsuits Filed Over Character.AI. As of September 2025, several lawsuits have been filed against Character Technologies, Inc., the company behind Character.AI, an app that lets users create, customize, and interact with an endless variety of AI personas. These lawsuits claim that the platform contributed to the suicides and attempted suicides of young users. Key issues in these cases include whether an app like Character.AI counts as a "product" for product liability purposes, and whether a chatbot's output is speech protected by the First Amendment.
This is very much an emerging area of the law. The rules of AI company liability for suicide are being written on a case-by-case basis. A big part of an attorney's role when taking on a case like this is to evaluate existing lawsuits, see how courts are coming down on key issues, analyze which arguments seem to be working (and which don't), and craft a strategy that considers all of these factors and more. Combine all of this with the complex intersection between product liability and wrongful death laws, and it's easy to see why legal expertise and experience are crucial to the success of cases like these.
AI-suicide lawsuits also have another thing in common: they're brought against companies with endless amounts of money and huge incentives to avoid legal responsibility for anything involving their platforms and products. In short, they're going to defend themselves against any lawsuit with everything they've got. Having an experienced lawyer on your side is the only way to level the playing field.
Learn more about finding the right injury lawyer for you and your case.
Need a lawyer? Start here.