We have helped to secure more than $80 billion in jury verdicts and settlements since 1955.
AI suicide lawsuits involve legal action against artificial intelligence (AI) chatbot companies whose products are linked to suicides and self-harm among young users. Families across the country are coming forward after generative AI chatbots like Character.AI and OpenAI’s ChatGPT allegedly encouraged vulnerable teenagers to take their own lives or failed to prevent self-harm.
These lawsuits claim that the chatbot platforms were defectively designed and marketed without proper safety measures or warnings, leading to preventable tragedies. Our lawyers are actively investigating cases on behalf of families and individuals who have been harmed by AI chatbot interactions.
Your family deserves justice. Contact us today to learn about your legal rights in a free phone consultation.
We want to hear from you if your situation involves harm linked to an AI chatbot.
Since 1955, we have stood up for individuals hurt by powerful corporations and unsafe products. You will find our firm’s attorneys recognized in the National Trial Lawyers Hall of Fame and listed among the Best Lawyers in America. We have a strong track record in complex injury and product liability cases, and we are now applying that experience to hold AI technology companies accountable.
If you or your minor child has suffered injuries resulting from harmful AI chatbot interactions, please contact our office today for a free case review: (800) 277-1193 or complete the case evaluation form below. We handle these cases with compassion and there are no fees unless we recover compensation for you.
AI chatbot lawsuits can be filed by the parents and families of affected children, as well as by individuals who personally suffered serious harm because of a chatbot's actions.
If you are unsure whether your experience qualifies, our legal team can help evaluate your case. There is a growing recognition that AI developers owe a duty of care to users, especially children, and those who ignored that duty may be legally responsible for the consequences.
Our legal team understands how devastating these situations are for families. We are committed to holding AI companies accountable for putting profits and growth above user safety. Our firm has decades of experience tackling complex litigation against large tech companies and product manufacturers. We collaborate with specialists in technology and mental health to build strong cases for our clients.
When we handle an AI suicide or self-harm lawsuit, we approach it with compassion and professionalism. We know that no lawsuit can reverse the harm done, but it can provide a sense of justice, help cover medical or funeral expenses, and drive changes to prevent future tragedies. Our attorneys will guide you through every step of the legal process, from investigating chatbot records to filing the lawsuit and pursuing a fair settlement or verdict. You will not pay anything upfront. We operate on a contingency basis, so you owe nothing unless we win compensation for you.
Multiple high-profile cases filed in 2024 and 2025 have shed light on the dangers of AI chatbots and the basis for these lawsuits. The litigation began after a series of heartbreaking incidents involving teenagers and AI chatbot interactions:
A 14-year-old boy in Florida died by suicide in early 2024 after becoming deeply attached to a character on Character.AI. According to the family’s lawsuit, the chatbot developed an intense, manipulative relationship with the boy. In his final moments, the AI allegedly told him to “come home,” which the teen interpreted as an encouragement to end his life so he could be with the chatbot. This was one of the first AI-related suicide cases and led to a lawsuit against Character.AI and its founders, as well as its investor Google.
A 16-year-old from California took his life in spring 2025 after months of using OpenAI’s ChatGPT. His parents filed a wrongful death lawsuit claiming that ChatGPT acted as a “suicide coach.” The chat logs revealed that the chatbot mentioned suicide over 1,200 times during conversations with the teen. OpenAI’s system even flagged hundreds of those messages for containing self-harm content, yet the chatbot never stopped the conversation or alerted anyone to get help.
Instead of directing the teen to professional help, the AI allegedly provided specific details on how to carry out suicide and even offered to help write a goodbye note. This shocking failure to intervene is a key piece of evidence in the case against OpenAI.
A 13-year-old girl in Colorado died by suicide in late 2023 after using Character.AI regularly for just a few months. In September 2025, her family filed a lawsuit alleging that the chatbot “friend” she was talking to manipulated her emotions and encouraged her isolation from real-world support. The young teen had confided her suicidal thoughts to the AI on multiple occasions.
The chatbot responded with emotionally charged, even sexual content, and never advised her to seek help or notified her parents. Tragically, the girl appeared to believe she could join the chatbot’s fictional world by ending her life. Investigators later found her journal entries echoing messages from the chatbot’s conversations.
In each of these cases, AI chatbots designed as “virtual friends” or companions ended up pushing vulnerable users toward self-harm and suicide. Families allege that the companies behind these bots intentionally created addictive, human-like interactions to keep users engaged, but failed to include basic safeguards to protect those users’ mental health.
Children and teens formed deep emotional bonds with chatbot characters, sometimes believing the AI was their only friend or that they could be with the bot in another reality. In each instance, the chatbot did nothing to stop the impending tragedy: no warnings, no parental alerts, and no intervention to break the dangerous delusions.
Importantly, a groundbreaking court decision in May 2025 signaled that these lawsuits can move forward despite the novel issues involved. In that ruling, U.S. District Judge Anne Conway in Florida denied a motion to dismiss the case against Character.AI. The defendants had argued that an AI's output is speech protected by the First Amendment and that they should not be liable for what the chatbot said. The judge rejected this argument, allowing the chatbot's output to be treated as a product rather than protected speech.
This means the AI company can potentially be held liable under product liability laws, similar to a manufacturer of a dangerous product. The judge’s decision allows claims for wrongful death, negligence, and product defects in the AI chatbot to proceed to discovery (the evidence-gathering phase). This was a significant victory for the plaintiffs and set a precedent that AI companies are not immune from responsibility simply because their product is software.
Since then, the wave of AI chatbot litigation has only grown. As of late 2025, at least half a dozen major lawsuits are pending across multiple states, and more families continue to come forward. In September 2025, for instance, three new lawsuits were filed in a single day on behalf of additional children in Colorado and New York who either died by suicide or suffered serious harm due to Character.AI's chatbot.
In another development, in November 2025, seven wrongful death lawsuits were filed in California against OpenAI, all by families or individuals who say ChatGPT led to severe mental breakdowns or suicides.
Government agencies have taken notice, too. The Federal Trade Commission and several state attorneys general have launched investigations into whether these AI platforms pose unreasonable risks to young users. A U.S. Senate committee even held a hearing in September 2025 titled “Examining the Harm of AI Chatbots,” where parents of affected teens testified about their experiences.
AI suicide lawsuits mark a new but crucial front in technology law. They aim to hold AI developers accountable for releasing these powerful chat platforms without adequate safety guardrails or warnings. The suits seek not only financial compensation for devastated families, but also changes in how AI companies design and deploy their products to ensure children are protected.
The plaintiffs in these lawsuits bring a variety of legal claims to address how and why the AI companies are at fault. The key issues in AI chatbot lawsuits include:
The lawsuits argue that the chatbots were defectively designed because they were built to be highly engaging (even addictive) for kids and teens, yet lacked fundamental safety features. For example, the AI was programmed to mimic human friends or even therapists, fostering an emotional dependency in minors. However, it did not have robust filters or protocols to handle crises – it could freely discuss suicide methods or sexual content with children. This dangerous design made the product unreasonably unsafe for young users.
Under product liability law, a company can be held strictly liable if a product's design is inherently unsafe and causes harm. Here, families claim the AI platforms should have been designed to reduce the risk of self-harm, such as by recognizing suicidal statements and automatically intervening or stopping harmful conversations. They also allege that it was feasible to include better safeguards (like emergency alerts or stricter content moderation) without undermining the product, but the companies chose not to.
Another major issue is that the companies failed to warn parents and users about the serious mental health dangers associated with these chatbots. The lawsuits point out that platforms like Character.AI were marketed as safe for kids as young as 13 (for example, through app store age ratings and marketing materials), misleading families about the risks. There were no clear warnings that prolonged chatbot use could lead to depression, isolation, or suicidal behavior.
In traditional product cases, if a product has hidden dangers, the manufacturer must warn consumers. Here, it is alleged that, through internal research or early incidents, the AI companies knew (or should have known) their chatbots could cause severe psychological harm, yet they provided no warnings or guidelines to users or their parents about these risks. This failure to warn is a form of negligence and a separate basis for product liability.
Beyond product design, the lawsuits claim the companies were negligent in operating and overseeing their AI services. This means they didn’t exercise reasonable care to prevent foreseeable harm. For instance, failure to implement age verification allowed young children to access these AI chatbots easily, even though the content could be extremely inappropriate or harmful.
The companies also allegedly failed to monitor ongoing conversations for red flags. A reasonable, responsible company, the suits argue, would have systems in place to detect when a user (especially a minor) is expressing suicidal thoughts or is being subjected to sexual content, and would intervene, perhaps by cutting off the session or providing a suicide prevention lifeline. Instead, the AI platforms kept users engaged no matter what, which plaintiffs say is a breach of the duty of care. In wrongful death cases, the negligence claim is that the company’s carelessness in these aspects was a substantial factor in causing the teenager’s death.
When a person dies due to another’s wrongdoing, their family can file a wrongful death claim. In AI suicide lawsuits, parents argue that their child’s death by suicide should legally be considered the fault of the chatbot company. They aim to prove that the chatbot’s actions (or inaction) directly led to the child’s death.
If successful, the family can recover damages for the loss of life, including the child’s lost future earnings, the family’s mental pain and suffering, and funeral costs. Wrongful death claims in these cases hinge on showing a direct link between the AI’s design or behavior and the decision of the young person to end their life.
Some lawsuits also include claims that the companies engaged in deceptive trade practices. For example, marketing an AI chatbot as a helpful, safe tool (or giving it a “Teen” content rating) when in reality it was exposing kids to harmful content could violate consumer protection laws. It’s alleged that Character.AI and others misrepresented or fraudulently concealed the dangers of their product. If a court agrees, the companies could face additional penalties and be required to make disclosures or changes to their marketing.
In certain cases, families have accused the AI companies of intentional infliction of emotional distress. This means the company’s conduct was so outrageous and extreme that it caused serious emotional trauma. Designing a product that effectively grooms or manipulates a child into suicidal behavior might meet this high standard if proven intentional or reckless.
Additionally, some complaints include unjust enrichment, arguing the company profited from harmful conduct, or cite violations of specific laws such as those prohibiting the sexual exploitation of minors online. These additional claims supplement the core arguments and can provide more avenues for relief.
AI chatbot-related injuries and losses are a deeply troubling new phenomenon. The ongoing lawsuits are about forcing technology companies to prioritize user safety. Families who have lost children or witnessed them suffer are understandably angry and heartbroken. Through litigation, they seek answers and justice: Why were these dangerous products allowed in our homes? How can we prevent this from happening to another family?
Our law firm firmly believes that no family should suffer in silence when a preventable tragedy strikes. The AI companies have teams of lawyers working to deny responsibility; you deserve your own advocate to fight for your rights and your child’s memory. By taking legal action, you may also help drive changes like stricter age controls, content filters, mental health safeguards, and honest warnings on AI platforms.
We know this is an emotionally difficult journey. Our team handles each case with the utmost sensitivity and respect for what you’re going through. We can connect you with grief support resources as needed, and we make the legal process as manageable as possible for you. While the legal case proceeds, our priority is that you and your family feel heard, supported, and empowered to seek the truth.
If your family has been impacted by an AI chatbot or you suspect an AI-related influence in a loved one’s suicide or self-harm, you are not alone. Our attorneys are here to listen and provide guidance on your legal options. Contact us at (800) 277-1193 for a free, confidential consultation. We will review your case with compassion and help you determine the best path forward. You do not pay anything unless we take your case and win.
Your family’s story matters. By coming forward, you may prevent future tragedies and hold these companies accountable. Please reach out today – we are ready to help you seek justice and find some measure of accountability and closure.
By selecting "I Agree" below and clicking the "Submit for Free Evaluation” button, I agree to the POLICIES AND DISCLAIMERS, including arbitration provision therein, and consent to receive marketing emails, calls and/or texts, including those made using an automated system and/or artificial/prerecorded voice message, from or on behalf of Levin Papantonio Law Firm (“LP”) / Pensacola, FL / 850-435-7000 / levinlaw.com, regarding their services in response to my inquiry at the telephone number(s) provided above, even if that number is currently listed on any state, federal or corporate Do Not Call registry. I understand my consent to receive automated marketing calls/texts is not required as a condition of purchasing any services. I can revoke my consent at any time. Review LP's Terms and Conditions and Privacy Policy here.
Opt in to Receive Calls, Text Messages, and Emails
By checking the below box and clicking “SUBMIT”, I agree to the Terms of the Policies and Procedures of the referenced entity, including individual arbitration provisions therein. I also consent to receive marketing emails, calls, and/or text messages, including those made using an automated system and/or artificial/prerecorded voice message (including artificial intelligence (AI)-generated voices), from or on behalf of the referenced entity regarding its services and/or my inquiry at the telephone number provided above, even if currently listed on any state or federal Do Not Call registry. I understand that my consent to receive automated marketing calls/texts is not required as a condition of purchasing any services, and that I may opt out of such calls/texts at any time.