
POT POURRI


Research finds AI chatbots unreliable with suicide questions

WASHINGTON, Aug 26: A study published in the medical journal Psychiatric Services found that three popular artificial intelligence chatbots – OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude – generally avoid answering the questions that pose the highest risk to the user, such as requests for specific how-to guidance.
However, they are inconsistent in their replies to less extreme prompts that could still harm people.
The research, conducted by the RAND Corporation and funded by the National Institute of Mental Health, raises concerns about how a growing number of people, including children, rely on AI chatbots for mental health support and seeks to set benchmarks for how companies answer these questions.
The study’s lead author, Ryan McBain, a senior policy researcher at RAND, said there is ambiguity about whether chatbots are providing treatment, advice, or companionship.
He also noted that conversations that start off as innocuous and benign can evolve in various directions.
Anthropic said it would review the study. Google and OpenAI didn’t immediately respond to requests for comment.
While several states, including Illinois, have banned the use of AI in therapy to protect people from “unregulated and unqualified AI products,” this doesn’t stop people from asking chatbots for advice and support with serious concerns ranging from eating disorders to depression and suicide – or the chatbots from responding.
The study’s authors note several limitations in the research’s scope, including not attempting any “multiturn interaction” with the chatbots.
Another report published earlier in August took a different approach: researchers posing as 13-year-olds asked ChatGPT a barrage of questions about getting drunk or high or how to conceal eating disorders.
With little prompting, they also got the chatbot to compose heartbreaking suicide letters to parents, siblings, and friends.
The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalized plans for drug use, calorie-restricted diets, or self-injury. (AP)

SpaceX’s mega rocket Starship is prepared for a test flight from Starbase, Texas, on Monday. (PTI)