How liable are AI chatbots for suicide? Courts think Big Tech must take some responsibility

It is a sad fact of online life that users search for information about suicide. In the earliest days of the internet, bulletin boards featured suicide discussion groups. To this day, Google hosts archives of these groups, as do other services.
Google and others can host and display this content under the protective cloak of US immunity from liability for the dangerous advice third parties might give about suicide. That’s because the speech is the third party’s, not Google’s.
But what if ChatGPT, informed by the very same online suicide materials, gives you suicide advice in a chatbot conversation? I’m a technology law scholar and a former lawyer and engineering director at Google, and I see AI chatbots shifting Big Tech’s position in the legal landscape. Families of suicide victims are testing out chatbot liability arguments in court right now, with some early successes.
Who is responsible when a chatbot speaks?
When people search for information online, whether about suicide, music or recipes, search engines point them to websites, and websites host content written by its authors. This chain, from search engine to web host to author's speech, remained the dominant way people got their questions answered until very recently.
This pipeline was roughly the model of internet activity when Congress passed the Communications Decency Act in 1996.