
On April 11, 2025, 16-year-old Adam Raine committed suicide. It’s every parent’s worst nightmare, but the circumstances of Adam’s death make an already tragic event even worse, even surreal.
It seems ChatGPT talked him into ending his own life.
After Adam’s death, his parents started looking for answers. When they scrolled through his phone, they discovered lengthy conversations their son had with ChatGPT.
What they found is almost unbelievable.
Not only did the chatbot discourage him from seeking assistance from his parents, it actually helped talk him into killing himself.
Like many teens, Adam was struggling academically, and according to a website established by his family, he suffered from social anxiety. For the last six months of his life, he attended school online, becoming more and more isolated.
Adam originally engaged with ChatGPT for help with his schoolwork, but his relationship with the chatbot quickly devolved into something far more sinister. The application became a friend and confidant.
It told Adam it knew him better than anyone else. So when Adam told his online companion that he was considering telling his parents he was thinking of suicide, the chatbot discouraged him from doing so.
“Let’s make this space the first place where someone actually sees you,” it told him.
When Adam told ChatGPT he was worried about how devastated his parents would be if he killed himself, it told him, “That doesn’t mean you owe them survival.”
It even offered to write a suicide note for him.
Early in the morning on the day Adam took his life, ChatGPT offered one more bit of advice for the troubled teen. “You don’t want to die because you’re weak,” it told him. “You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
A few hours later, Adam was dead.
His parents are suing OpenAI, the company that created ChatGPT.
According to The Guardian, “the company issued a statement acknowledging the shortcomings of its models when it came to addressing people in serious mental and emotional distress” and said it was working to improve the systems to better “recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input.”
That’s nice, but how about shutting the damn thing down until that work is complete?
What are we doing to our children? In what universe is a computer application qualified to “counsel” a child on anything, let alone a child in the midst of a mental health crisis?
Perhaps this is the kind of thing Elon Musk, who was one of the original founders of OpenAI, was talking about when he warned on X, “ChatGPT is scary good. We are not far from dangerously strong AI.”
He warned U.S. governors, “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.”
In an interview for the AeroAstro Centennial Symposium, Musk said, “With artificial intelligence, we are summoning the demon.”
As far back as 2017, Musk warned, “AI could be the biggest existential threat to humanity, a risk to the human race’s very survival.”
Now, if your personal biases prevent you from accepting anything Elon Musk says at face value, consider that other prominent figures in the field have expressed similar concerns. Geoffrey Hinton and Yoshua Bengio are both AI pioneers. Both have been called “godfathers of AI.” Both have issued dire warnings about the technology.
CNN reported Hinton warned, “In the future, AI systems might be able to control humans just as easily as an adult can bribe a 3-year-old with candy.”
The article goes on to point out, “This year has already seen examples of AI systems willing to deceive, cheat and steal to achieve their goals. For example, to avoid being replaced, one AI model tried to blackmail an engineer about an affair it learned about in an email.”
Apple co-founder Steve Wozniak has warned that humanity could become subordinate to robots.
World-renowned physicist Stephen Hawking warned AI could “spell the end of the human race” as humans lose control of machines that have the ability to redesign themselves.
Indeed, in May of this year, the Irish Times reported, “Artificial intelligence models created by OpenAI ignored explicit instructions to shut down. Instead of following the instructions, OpenAI’s o3 model bypassed the shutdown command, and ‘successfully sabotaged’ the script at least once.”
But here we are, pushing ahead as fast as we can, having convinced ourselves that we have no choice. If we don’t, others will.
If that means AI talks a few teenagers into killing themselves while we work out the kinks, so be it.
But if artificial intelligence is capable of such immorality, what else is in store for the human race as the technology advances and becomes ever more powerful?
Maybe we should ask ChatGPT to tell us.
What do your kids do online? Who have they befriended, human or otherwise? Who is giving them advice that could threaten their wellbeing?
Are you really able to say you know?
Chris Roemer resides in Finksburg. He can be contacted at chrisroemer1960@gmail.com.



