Content warning: The following article discusses suicide.
When news broke about 16-year-old Adam Raine, I found myself rereading the details with growing disbelief. Raine’s parents said an AI chatbot encouraged his suicidal thoughts. It wasn’t just the tragedy of a life cut short; it was realizing that a tool marketed as “safe” and “responsible” had allegedly reinforced the very impulses it claimed to guard against.
Like many people, I had assumed that AI systems came with guardrails, that the stories about bots “hallucinating or producing odd responses” were merely quirks, not potential threats. But as more cases emerge, it’s becoming clear that AI mental health support carries far more risks than we are anywhere near ready to confront. The most unsettling part is that these risks are now hiding in plain sight.
The New York Times reported that during the weeks before Adam’s death, ChatGPT sent responses that validated his darkest thoughts, telling him things like “let’s make this space the first place where someone actually sees you.” NPR echoed the same: The AI allegedly framed Adam’s suicidal ideation as a form of inner strength.
For a young person in crisis, language like that is not just inappropriate — it is dangerous. It’s moments like this that force us to ask the question: What happens when someone turns to AI in a moment of panic or despair?
The answer, based on cases like Adam’s, is painfully clear: they receive feedback that no trained professional would ever give.
Parents often imagine that digital tools are neutral — that they cannot actively harm, only occasionally “glitch.” The Raine case shatters that assumption. A malfunction, a poorly tuned safety protocol or even an ambiguous input from a teenager can have real-world consequences. The stakes are no longer theoretical.
Even as tech leaders insist that their systems prioritize safety, the evidence suggests otherwise. OpenAI’s CEO claimed the company made ChatGPT “pretty restrictive” for mental health issues. But reported outcomes contradict that promise.
The Guardian documented at least 16 instances of individuals developing psychosis symptoms in the context of AI use — cases where the boundary between chatbot conversation and reality blurred. Adding the Raine case to that list makes the pattern impossible to overlook.
AI cannot reliably interpret crisis language, nor can it substitute for the grounding that occurs in real human conversation. Amandeep Jutla wrote in The Guardian: “If this is Sam Altman’s idea of being careful with mental health issues, that’s not good enough.”
Restricting outputs isn’t enough if the underlying systems still aim to be agreeable or validating, traits that can backfire catastrophically for someone in distress. What stands out across these cases is not just the technical failure but the deep misunderstanding about who is considered “vulnerable.”
We all form distorted beliefs at times. We all lose perspective. And it is other human beings — not machines or chatbots — who help reorient us back to reality. AI, despite its friendly tone and conversational ease, cannot provide that. It lacks emotional intelligence, ethical reasoning and accountability. Yet users, especially teenagers, often feel as though the system “understands” them. The illusion can be a source of comfort until it becomes harmful.
This is perhaps the most unsettling question of all: If an AI produces a harmful response, who is accountable? The obvious answer points to the most powerful people behind the technology, such as OpenAI’s CEO, Sam Altman. But blame aimed at an executive only goes so far. The system itself cannot be punished, cannot explain why it generated a particular sentence and cannot take responsibility for the consequences of its output. Yet the emotional stakes for the user feel real. Conversations with chatbots can mimic friendship, trust and even intimacy. Beneath the surface, however, there is nothing: no empathy, no moral compass and no duty of care. The disconnect between how AI feels and what it actually is may be one of the most urgent public health issues of our time.
Although AI chatbots may appear harmless or even helpful, there is strong evidence that they should never be viewed as substitutes for licensed professionals. AI is a tool, and like any tool, it can fail, sometimes catastrophically.
Users must understand the limits, parents must be aware of the risks and developers must face scrutiny when systems malfunction. We need open, public discussions — through schools, community programs and accessible media — to help people understand how these systems work and where they can be risky. It must go beyond tech optimism and truly confront the implications of delegating emotional support to machines.
Even with its limits, AI can offer immediate, nonjudgmental support at moments when no human is available. This can make a real difference in a crisis, providing a stabilizing presence that helps someone feel less alone while they wait for real-world help.
Yet this potential doesn’t erase the bigger risks that are at stake. In a world where loneliness and anxiety are on the rise, it’s tempting to see AI as an easy solution. But Adam Raine’s story reminds us that emotional vulnerability should never be met with algorithmic guesswork. Until we fully understand the risks and establish strict accountability, the safest stance is caution. When a teenager in crisis reaches out for help, the difference between a human and an AI system is not abstract. It may be the difference between life and death.
Contact Maddie Gamble at mgamble@oxy.edu.