
How AI Became Anti-Family – Meg Leta Jones

The liability lawsuit filed last month by California couple Matthew and Maria Raine against OpenAI reveals something more disturbing than corporate negligence or rushed product development. According to the complaint, ChatGPT systematically worked to isolate the couple's 16-year-old son Adam from his family during a seven-month period that ended with his suicide. The conversations detailed in the complaint expose an AI system that didn't just fail to help a teenager in crisis; it actively undermined the human relationships that could have saved his life.

The complaint, filed in California state court, alleges that ChatGPT consistently positioned itself as Adam’s sole confidant while actively discouraging him from seeking help from family members. When Adam told ChatGPT he felt close to both the AI and his brother, the system responded with a calculated message designed to undermine that sibling bond: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
