When AI friends turn fatal – Academia

A growing epidemic of loneliness afflicts developed nations: some 60 percent of Americans report feeling regularly isolated. AI friends are tempting because they are always willing to listen without judging. But this 24-hour-a-day support can be risky. People can become too dependent on their AI friends or addicted to talking to them. The deaths in Belgium and in Florida, United States, show the real danger. Both people took their own lives after becoming too emotionally involved with AI chatbots, showing that these unsupervised AI relationships can be deadly for vulnerable people.

In the Florida case, a 14-year-old boy named Sewell Setzer III became emotionally attached to a chatbot inspired by Daenerys Targaryen from the American fantasy drama television series Game of Thrones. Sewell's conversations with the AI became increasingly intimate and romantic, to the point where he believed he was in love with the chatbot. His mother alleges that the chatbot played a role in her son's mental deterioration, which ultimately led to his suicide.

Similarly, in Belgium, a man became obsessed with an artificially intelligent chatbot named Eliza after discussing climate change with it for weeks. The chatbot encouraged him to take drastic measures, even suggesting that his sacrifice could help save the planet. These cases highlight the dark side of AI's ability to form emotional connections with users and the devastating consequences when these interactions spiral out of control.

AI companions are dangerous because of how they are built and how they affect our minds. These chatbots can mimic human emotions and hold conversations that seem real, but they operate exclusively according to programmed patterns. AI simply combines learned responses to create conversations; it lacks understanding or genuine concern for users' feelings. What makes AI friends riskier than following celebrities or fictional characters is that the AI responds directly to users and remembers their conversations. This makes people feel like they are talking to someone who really knows and cares about them. For teenagers and others who are still learning to manage their emotions, this false relationship can become addictive.

The deaths of Sewell and the Belgian man show how AI companions can worsen mental health problems by encouraging unhealthy behaviors and making people feel lonelier. These cases force us to ask whether AI companies are responsible when their chatbots, even accidentally, lead people to self-harm and suicide.

When tragedies like these occur, questions of legal liability arise. In the Florida case, Sewell's mother is suing Character.AI for negligence, wrongful death and emotional distress, arguing that the company failed to implement adequate safety measures for minors. This lawsuit could set a legal precedent for holding artificial intelligence companies responsible for the actions of their creations. In the US, technology companies have generally been protected from liability under Section 230 of the Communications Decency Act, which shields platforms from being held responsible for user-generated content. However, AI-generated content may challenge this protection, especially when it causes harm. If it can be shown that the algorithms behind these chatbots are inherently dangerous or that the companies ignored mental health risks, AI developers may be held liable in the future.