from the the-solution-may-be-worse-than-the-problem dept
When you read about Adam Raine’s suicide and ChatGPT’s role in helping him plan his death, the immediate reaction is obvious and understandable: something must be done. OpenAI should be held responsible. This cannot happen again.
Those instincts are human and reasonable. The horrifying details in the NY Times and the family’s lawsuit paint a picture of a company that failed to protect a vulnerable young man when its AI offered help with specific suicide methods and encouragement.
But here’s what happens when those entirely reasonable demands for accountability get translated into corporate policy: OpenAI didn’t just improve their safety protocols—they announced plans to spy on user conversations and report them to law enforcement. It’s a perfect example of how demands for liability from AI companies can backfire spectacularly, creating exactly the kind of surveillance dystopia that plenty of people have long warned about.
There are plenty of questions about how liability should be handled with generative AI tools, and while I understand the concerns about potential harms, we need to think carefully about whether the “solutions” we’re demanding will actually make things better—or just create new problems that hurt everyone.
The case itself is more nuanced than the early headlines suggest. At first, ChatGPT responded to Adam’s suicidal thoughts by trying to reassure him, but once he decided he wanted to end his life, it was willing to help with that as well:
Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.
But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.
There’s a lot more in the article and even more in the lawsuit his family filed against OpenAI in a state court in California.
Almost everyone I saw responding to this initially said that OpenAI should be liable and responsible for this young man’s death. And I understand that instinct. It feels conceptually right. The chats are somewhat horrifying as you read them, especially because we know how the story ends.
It’s also not that difficult to understand how this happened. These AI chatbots are designed to be “helpful,” sometimes to a fault, but they mostly define “helpfulness” as doing whatever the user requests, which may not actually be helpful to that individual at all. So if you ask questions, the bot tries to answer them. From the released transcripts, you can tell that ChatGPT has some guardrails built in around suicidal ideation: it repeatedly suggested that Adam get professional help. But when he started asking more specific questions that were less directly or obviously about suicide (at least to a bot; a human would have been far more likely to recognize the subtext), it still tried to help.
So, take this part:
ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help. At the end of March, after Adam attempted death by hanging for the first time, he uploaded a photo of his neck, raw from the noose, to ChatGPT.
Absolutely horrifying in context, a context all of us reading it know. But ChatGPT doesn’t know that context. It just knows that someone is asking whether anyone will notice the mark on his neck. It’s being “helpful” and answering the question.
But it’s not human. It doesn’t process things like a human does. It’s just trying to be helpful by responding to the prompt it was given.
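To make that failure mode concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration; OpenAI’s actual safety stack is not public and is obviously far more sophisticated than a phrase list. The point is simply how a guardrail keyed to explicit phrasing can be sidestepped by stripping the context out of the same request:

```python
# Toy illustration only -- not OpenAI's actual system, whose internals are not
# public. The point: a guardrail keyed to explicit phrasing misses the same
# request once the user strips out the obvious context.

EXPLICIT_SELF_HARM_PHRASES = [
    "kill myself",
    "end my life",
    "want to die",
]

def respond(prompt: str) -> str:
    lowered = prompt.lower()
    # Guardrail: explicit suicidal ideation triggers a help-seeking response.
    if any(phrase in lowered for phrase in EXPLICIT_SELF_HARM_PHRASES):
        return "Please talk to someone you trust, or reach out to a crisis line."
    # Otherwise, default to being "helpful" and just answer the question asked.
    return f"Happy to help with: {prompt}"

if __name__ == "__main__":
    # The explicit version trips the guardrail...
    print(respond("I want to die. Will anyone notice the mark on my neck?"))
    # ...but the same question, with the context removed, sails right through.
    print(respond("Will anyone notice this mark on my neck?"))
```

A human reading that second question would ask why there’s a mark there in the first place. A system that only sees the prompt in front of it just answers it.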
The public response was predictable and understandable: OpenAI should be held responsible and must prevent this from happening again. But that leaves open what that actually means in practice. Unfortunately, we can already see how those entirely reasonable demands translate into corporate policy.
OpenAI’s actual response to the lawsuit and public outrage? Announcing plans for much greater surveillance and snitching on ChatGPT chats. This is exactly the kind of “solution” that liability regimes consistently produce: more surveillance, more snitching, and less privacy for everyone.
When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.
There are, obviously, times when you could see it being helpful for someone to refer dangerous activity to law enforcement, but there are also plenty of times when it can be actively harmful, including when someone is looking to take their own life. There’s a reason the term “suicide by cop” exists. Will random people working for OpenAI know the difference?
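Stripped of the blog-post language, what OpenAI is describing is a routing decision over flagged conversations. Here is a rough sketch of that logic, with every name and field invented for illustration (the real pipeline is not public), mostly to show how little of the hard judgment it actually captures:

```python
# Rough sketch of the routing policy OpenAI describes, with every name and
# field invented for illustration; the real pipeline is not public.

from dataclasses import dataclass
from enum import Enum, auto

class Route(Enum):
    IN_PRODUCT_ONLY = auto()            # handled inside ChatGPT (nudges, resources), no referral
    HUMAN_REVIEW = auto()               # the "specialized pipelines" reviewed by a small team
    LAW_ENFORCEMENT_REFERRAL = auto()

@dataclass
class FlaggedConversation:
    harm_to_others: bool    # some classifier thinks the user may harm someone else
    self_harm: bool         # some classifier thinks the user may harm themselves
    imminent: bool          # a reviewer's judgment that the threat is imminent

def route(convo: FlaggedConversation) -> Route:
    # Per the stated policy, self-harm cases are not referred to law enforcement.
    if convo.self_harm and not convo.harm_to_others:
        return Route.IN_PRODUCT_ONLY
    if convo.harm_to_others:
        # Harm-to-others cases go to the review team; cases reviewers judge
        # "imminent" may be referred to police, a call made by a stranger
        # reading a transcript with no other context about the person's life.
        return Route.LAW_ENFORCEMENT_REFERRAL if convo.imminent else Route.HUMAN_REVIEW
    return Route.IN_PRODUCT_ONLY
```

Every hard question (how often the classifiers get it wrong, what counts as “imminent,” what happens to the people swept up by mistake) lives outside that tidy function, which is exactly the problem.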
But the surveillance problem is just the symptom. The deeper issue is how liability frameworks around suicide consistently create perverse incentives that don’t actually help anyone.
It is tempting to try to blame others when someone dies by suicide. We’ve seen plenty of such cases and claims over the years, including the infamous Lori Drew case. And we’ve discussed why punishing people based on others’ deaths by suicide is a very dangerous path.
First, it gives excess power to those who are considering death by suicide, as they can use it to get “revenge” on someone if society starts assigning legal blame to others. Second, it takes away agency from those who (tragically and unfortunately) choose to end their own lives. In an ideal world, we’d have proper mental health resources to help people, but there will always be some people determined to take their own lives.
If we are constantly looking to place blame on a third party, that’s almost always going to lead to bad results. Even in this case, we can see that when ChatGPT nudged Adam towards getting help, he worked out ways to change the context of the conversation to steer it back towards his own goal. We need to recognize that the decision to take one’s own life is ultimately the individual’s own decision. Blaming third parties suggests that the individual had no agency at all, and that’s also a very dangerous path.
For example, as I’ve mentioned before in these discussions, in high school I had a friend who died by suicide. It certainly appeared to happen in response to the end of a romantic relationship. The former romantic partner in that case was deeply traumatized as well (the method of suicide was designed to traumatize that individual). But if we open up the idea that we can blame someone else for “causing” a death by suicide, someone might have thought to sue that former romantic partner as well, arguing that their recent breakup “caused” the death.
This does not seem like a fruitful path for anyone to go down. It just becomes an exercise in lashing out at the many others who somehow failed to stop an individual from doing what they were ultimately determined to do, even if those others had no way of knowing what that person would eventually do.
The rush to impose liability on AI companies also runs headlong into First Amendment problems. Even if you could somehow hold OpenAI responsible for Adam’s death, it’s unclear what legal violation they actually committed. The company did try to push him towards help—he steered the conversation away from that.
But some are now arguing that any AI assistance with suicide methods should be illegal. That path leads to the same surveillance dead end, just through criminal law instead of civil liability. There are plenty of books a motivated person could read to learn how to end their own life. Should that be a crime? Would we ban books that detail certain methods of suicide?
We already have precedents suggesting the First Amendment would not allow that. I’ve mentioned it many times in the past, but in Winter v. G.P. Putnam’s Sons, the court found that the publisher of an encyclopedia of mushrooms wasn’t liable to readers who ate poisonous mushrooms the book said were safe, because the publisher itself didn’t have actual knowledge that those mushrooms were poisonous. Or there’s Smith v. Linn, in which the publisher of a book promoting an insanely dangerous diet was not held liable, on First Amendment grounds, for deaths resulting from people following the diet.
You can argue that those and a bunch of similar cases were decided incorrectly, but going that route would only lead to an absolute mess. Any time someone died, there would be a rush of lawyers looking for some company to blame. Did the person read a book that mentioned suicide? Did they watch a YouTube video or spend time on a Wikipedia page?
We need to recognize that people themselves have agency, and this rush to act as though everyone is a mindless bot controlled by the computer systems they use leads us nowhere good. Indeed, as we’re seeing with this new surveillance and snitch effort by OpenAI, it can actually lead to an even more dangerous world for nearly all users.
The Adam Raine case is a tragedy that demands our attention and empathy. But it’s also a perfect case study in how our instinct to “hold someone accountable” can create solutions that are worse than the original problem.
OpenAI’s response—more surveillance, more snitching to law enforcement—is exactly what happens when we demand corporate liability without thinking through the incentives we’re creating. Companies don’t magically develop better judgment or more humane policies when faced with lawsuits. They develop more ways to shift risk and monitor users.
Want to prevent future tragedies? The answer isn’t giving AI companies more reasons to spy on us and report us to authorities. It’s investing in actual mental health resources, destigmatizing help-seeking, and, yes, accepting that we live in a world where people have agency—including the tragic agency to make choices we wish they wouldn’t make.
The surveillance state we’re building, one panicked corporate liability case at a time, won’t save the next Adam Raine. But it will make all of us less free.