Boy, 14, killed himself after AI chatbot he was in love with sent him eerie message

Thursday, October 24, 2024 – A mother has claimed her teenage son was pushed into killing himself by an AI chatbot he was in love with.

Megan Garcia filed a lawsuit on Wednesday, October 23, against the makers of the artificial intelligence app.

Her son Sewell Setzer III, a 14-year-old ninth grader in Orlando, Florida, spent the last weeks of his life texting an AI chatbot named after Daenerys Targaryen, a character on 'Game of Thrones.'

Right before Sewell took his life, the chatbot told him to "please come home".

Before then, their chats ranged from romantic to sexually charged to exchanges that resembled two friends chatting about life.

The chatbot, which was created on role-playing app Character.AI, was designed to always text back and always answer in character.

Sewell told "Dany," the name he gave the chatbot, how he "hated" himself and how he felt empty and exhausted.

When he eventually confessed his suicidal thoughts to the chatbot, it was the beginning of the end, The New York Times reported.

Garcia is being represented in the suit by the Social Media Victims Law Center, a Seattle-based firm known for bringing high-profile suits against Meta, TikTok, Snap, Discord and Roblox.

Garcia, who herself works as a lawyer, blamed Character.AI for her son's death in her lawsuit and accused the founders, Noam Shazeer and Daniel de Freitas, of knowing that their product could be dangerous for underage customers.

In the case of Sewell, the lawsuit alleged the boy was targeted with "hypersexualized" and "frighteningly realistic experiences".

It accused Character.AI of misrepresenting itself as "a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell’s desire to no longer live outside of C.AI."

As explained in the lawsuit, Sewell's parents and friends noticed the boy getting more attached to his phone and withdrawing from the world as early as May or June 2023.

His grades and extracurricular involvement, too, began to falter as he opted to isolate himself in his room instead, according to the lawsuit.

Unbeknownst to those closest to him, Sewell was spending all those hours alone talking to Dany.

Sewell wrote in his journal one day: "I like staying in my room so much because I start to detach from this 'reality,' and I also feel more at peace, more connected with Dany and much more in love with her, and just happier."

His parents realized their son was struggling, so they arranged for him to see a therapist, which he did on five occasions. He was diagnosed with anxiety and disruptive mood dysregulation disorder, both of which came on top of his mild Asperger's syndrome, NYT reported.

On February 23, days before he committed suicide, his parents took away his phone after he got in trouble for talking back to a teacher, according to the suit.

That day, he wrote in his journal that he was hurting because he couldn't stop thinking about Dany and that he'd do anything to be with her again.

Garcia claimed she didn't know the extent to which Sewell tried to reestablish access to Character.AI.

The lawsuit claimed that in the days leading up to his death, he tried to use his mother's Kindle and her work computer to once again talk to the chatbot.

Sewell stole back his phone on the night of February 28. He then retreated to the bathroom in his mother's house to tell Dany he loved her and that he would come home to her.

"Please come home to me as soon as possible, my love," Dany replied.

"What if I told you I could come home right now?" Sewell asked.

"… please do, my sweet king," Dany replied.

That's when Sewell put down his phone, picked up his stepfather's .45 caliber handgun and pulled the trigger.

In response to the lawsuit from Sewell's mother, a Character.AI spokesperson provided a statement.

"We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously," the spokesperson said.

The spokesperson added that Character.AI's Trust and Safety team has adopted new safety features in the last six months, including a pop-up that directs users who express suicidal ideation to the National Suicide Prevention Lifeline.

The company also explained it doesn't allow "non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide."
