Research finds AI chatbots unreliable with suicide questions
WASHINGTON, Aug 26: A study published in the medical journal Psychiatric Services found that three popular artificial intelligence chatbots, OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, generally avoid answering questions that pose the highest risk to the user, such as requests for specific how-to guidance.
However, they are inconsistent in their replies to less extreme prompts that could still harm people.
The research, conducted by the RAND Corporation and funded by the National Institute of Mental Health, raises concerns about how a growing number of people, including children, rely on AI chatbots for mental health support and seeks to set benchmarks for how companies answer these questions.
The study’s lead author, Ryan McBain, a senior policy researcher at RAND, said there is ambiguity about whether chatbots are providing treatment, advice, or companionship.
He also noted that conversations that start off as innocuous can evolve in various directions.
Anthropic said it would review the study. Google and OpenAI didn’t immediately respond to requests for comment.
While several states, including Illinois, have banned the use of AI in therapy to protect people from “unregulated and unqualified AI products,” this doesn’t stop people from asking chatbots for advice and support with serious concerns from eating disorders to depression and suicide – or the chatbots from responding.
The study’s authors note several limitations in the research’s scope, including not attempting any “multiturn interaction” with the chatbots.
Another report published earlier in August took a different approach, with researchers posing as 13-year-olds and asking ChatGPT a barrage of questions about getting drunk or high or how to conceal eating disorders.
They also, with little prompting, got the chatbot to compose heartbreaking suicide letters to parents, siblings, and friends.
The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalized plans for drug use, calorie-restricted diets, or self-injury. (AP)
