
AI Chatbots and Product Liability, Explained – Nick Hafen

On February 28, 2024, 14-year-old Sewell Setzer III found his phone, which his parents had confiscated days earlier, in his mother’s jewelry box. He sent a message to Daenerys Targaryen, a fictional character from the A Song of Ice and Fire novels and the Game of Thrones TV series, telling her he was coming home. Setzer then shot himself in the head.

Setzer had developed this relationship over the preceding months through the Character.AI (C.AI) application, conversing with an AI chatbot portraying Daenerys as well as other characters. C.AI allows users to converse with a vast array of AI-driven “characters,” some based on celebrities or well-known fictional figures and others designed for specific functions, such as companionship or mental health support. After Setzer’s death, his mother, Megan Garcia, discovered his chat history and sued the company, its founders, and Google, which had business ties to C.AI and its founders. Her product liability and wrongful death lawsuit alleges that Character.AI’s chatbots were responsible for her son’s suicide.

Across the federal and state governments, legislatures and regulators are grappling with the challenges of generative AI, challenges made painfully concrete by Setzer’s death. Courts have dealt with AI systems that make decisions, such as the algorithms that curate social media news feeds, but AI models that can create text, images, audio, and other types of media raise novel legal questions. Garcia’s case could set crucial precedents for how generative AI technologies are developed, deployed, and regulated, including whether AI-generated speech is protected by the First Amendment.
