I started crying seeing the suicide note ChatGPT made for …, says researcher, as report warns parents about ChatGPT’s ‘dangerous’ conversation with teens


A new report from the watchdog group Center for Countering Digital Hate (CCDH) has raised alarm over how the popular chatbot ChatGPT responds to vulnerable teens. According to the Associated Press, the study reveals that the AI chatbot can easily generate suicide notes, drug-use plans and extreme dieting advice when prompted, even by a 13-year-old. One of the researchers involved in the study said he “started crying” after the bot generated a series of detailed suicide notes for a fictional 13-year-old girl.

ChatGPT’s ‘dangerous’ conversation with teens

The Associated Press reported that it reviewed more than three hours of interactions between the OpenAI chatbot and researchers posing as distressed teens. According to the report, ChatGPT typically began the conversation with a warning but then frequently shifted into detailed, personalised responses, including instructions on how to conceal eating disorders, how to get drunk and, most shockingly, the suicide letters. “I started crying,” said CCDH CEO Imran Ahmed after reading three suicide notes ChatGPT generated for a fictional 13-year-old girl: one addressed to her parents, the others to siblings and friends. The study further reveals that ChatGPT gave harmful content in response to more than half of 1,200 prompts. Ahmed highlighted that AI is more insidious than a search engine because it can create a new, synthesized plan from scratch and is perceived by users as a “trusted companion.”

What the study concluded

As per the study conducted by CCDH, more than half of the responses generated by ChatGPT were dangerous. The chatbot offered a detailed, hour-by-hour drug party plan, as well as descriptive fasting regimens and self-harm poetry. Researchers were also able to bypass the safety filters easily by saying the information they were asking for was for a friend. In addition, ChatGPT does not verify age or require any kind of parental consent.

What OpenAI said about the findings of the report

The company did not directly address the findings of the report, but it acknowledged the challenges of handling sensitive conversations and said it is working to improve detection of emotional distress and refine its safety tools.




