A mother has filed a US federal lawsuit against Character Technologies, the artificial intelligence (AI) company behind the popular Character.AI chatbot platform, and has also named Google as a defendant.
Her 14-year-old son died by suicide, and the lawsuit claims the company’s AI chatbots were a “substantial factor” in his death.
Megan Garcia alleges in her lawsuit that her son, Sewell Setzer III, engaged in prolonged conversations with a chatbot on the Character.AI platform, which she says drew him into an emotionally and sexually abusive online relationship. In their final exchange, in February 2024, the chatbot, purportedly posing as a character from the television series “Game of Thrones,” encouraged the teenager to “kill himself,” according to the lawsuit.
The lawsuit argues that Character Technologies failed to take sufficient precautions to protect young users on its platform and that the design of its chatbots fostered an addictive and harmful environment. Garcia’s lawyers also contend that Google shares the blame, pointing to its involvement in the development and marketing of Character.AI’s technology.
They emphasize that Character.AI’s founders are former Google employees and that Google had a relationship with the start-up and licensed its AI technology.
In a major development, U.S. District Judge Anne Conway on Wednesday rejected Character Technologies’ bid to dismiss the lawsuit on the grounds that its chatbots’ output is speech protected by the First Amendment. The judge said she was “not ready” to deem the chatbots’ output speech at such an early stage of the litigation. The decision clears the way for the wrongful-death case to proceed and could set a precedent for the legal culpability of AI companies. The court also held that Garcia could pursue a claim against Google for its alleged contribution.
Garcia’s lawyer, Meetali Jain of the Tech Justice Law Project, called the judge’s decision “precedent-setting,” saying it “will introduce a new standard of legal accountability in the AI and tech landscape.” Such powerful technology should not be unleashed until Silicon Valley puts user safety first, she said.
A spokesperson for Character.AI said the company was shocked by the situation but emphasized its commitment to keeping the app safe for users, pointing to safeguards already in place, such as features that block conversations about self-harm. Google, for its part, said in a statement that it strongly disagreed with the ruling, maintaining that Google and Character.AI are entirely separate entities and that Google neither trained nor controlled the Character.AI app.
Legal experts say the case may prove an important test of the broader legal and ethical implications of AI, particularly its consequences for vulnerable users. The suit is a pivotal case for the tech industry, policymakers, and parents to watch as AI moves deeper into the lives of young people.