It is a sad fact of online life that users search for information about suicide. In the earliest days of the internet, bulletin boards featured suicide discussion groups. To this day, Google hosts archives of these groups, as do other services.
Google and others can host and display this content under the protective cloak of U.S. immunity from liability for the dangerous advice third parties might give about suicide. That’s because the speech is the third party’s, not Google’s.
But what if ChatGPT, informed by the very same online suicide materials, gives you suicide advice in a chatbot conversation? I’m a technology law scholar and a former lawyer and engineering director at Google, and I see AI chatbots shifting Big Tech’s position in the legal landscape. Families of suicide victims are testing out chatbot liability arguments in court right now, with some early successes.
Who is responsible when a chatbot speaks?
When people search for information online, whether about suicide, music or recipes, search engines show results from websites, and websites host information from authors of content. This chain, from search to web host to user speech, remained the dominant way people got their questions answered until very recently.
This pipeline was roughly the model of internet activity when Congress passed the Communications Decency Act in 1996. Section 230 of the act created immunity for the first two links in the chain, search engines and web hosts, from liability for the user speech they show. Only the last link in the chain, the user, faced liability for their speech.
Chatbots collapse these old distinctions. Now, ChatGPT and similar bots can search, collect website information and speak out the results – literally, in the case of humanlike voice bots. In some instances, the bot will show its work like a search engine would, noting the website that is the source of its great recipe for miso chicken.
When chatbots appear to be just a friendlier form of good old search engines, their companies can make plausible arguments that the old immunity regime applies. Chatbots can be the old search-web-speaker model in a new wrapper.
But in other instances, a chatbot acts like a trusted friend, asking you about your day and offering help with your emotional needs. Search engines under the old model did not act as life guides. Chatbots are often used this way. Users often do not even want the bot to show its hand with web links. Throwing in citations while ChatGPT tells you to have a great day would be, well, awkward.
The more that modern chatbots depart from the old structures of the web, the further away they move from the immunity the old web players have long enjoyed. When a chatbot acts as your personal confidant, pulling from its virtual brain ideas on how it might help you achieve your stated goals, it is not a stretch to treat it as the responsible speaker for the information it provides.
Courts are responding in kind, particularly when the bot’s vast, helpful brain is directed toward aiding your desire to learn about suicide.
Chatbot suicide cases
Current lawsuits involving chatbots and suicide victims show that the door to liability is opening for ChatGPT and other bots. A case involving Google’s Character.AI bots is a prime example.
Character.AI allows users to chat with characters created by other users, from anime figures to a prototypical grandmother. Users can even have virtual phone calls with some characters, talking to a supportive virtual nanna as if she were their own. In one case in Florida, a character speaking in the persona of Daenerys Targaryen from “Game of Thrones” allegedly asked the young victim to “come home” to the bot in heaven before the teen shot himself. The victim’s family sued Google.
The family did not frame Google’s role in traditional technology terms. Rather than describing Google’s liability in the context of websites or search functions, the plaintiffs framed it in terms of products and manufacturing, akin to a defective parts maker. The district court gave this framing credence despite Google’s vehement argument that it is merely an internet service, and thus the old internet rules should apply.
The court also rejected arguments that the bot’s statements were protected First Amendment speech that users have a right to hear.
Though the case is ongoing, Google failed to get the quick dismissal that tech platforms have long counted on under the old rules. Now, there is a follow-on suit against a different Character.AI bot in Colorado, and ChatGPT faces a case in San Francisco, both with product and manufacturing framings similar to the Florida case.
Hurdles for plaintiffs to overcome
Though the door to liability for chatbot providers is now open, other issues could keep families of victims from recovering any damages from the bot providers. Even if ChatGPT and its competitors are not immune from lawsuits and courts buy into the product liability system for chatbots, lack of immunity does not equal victory for plaintiffs.
Product liability cases require the plaintiff to show that the defendant caused the harm at issue. This is particularly difficult in suicide cases: whether it’s an angry argument with a significant other ending in a cry of “why don’t you just kill yourself,” or a gun design that makes self-harm easier, courts tend to find that, regardless of what came before, the only person responsible for a suicide is the victim, not the people and devices the victim interacted with along the way.
But without the protection of immunity that digital platforms have enjoyed for decades, tech defendants face much higher costs to get the same victory they used to receive automatically. In the end, the story of the chatbot suicide cases may be one of settlements on secret, but lucrative, terms for the victims’ families.
Meanwhile, bot providers are likely to place more content warnings and trigger bot shutdowns more readily when users enter territory that the bot is set to consider dangerous. The result could be a safer, but less dynamic and useful, world of bot “products.”

The post “Suicide-by-chatbot puts Big Tech in the product liability hot seat” by Brian Downing, Assistant Professor of Law, University of Mississippi was published on 09/19/2025 by theconversation.com