Taking ChatGPT to 'therapy' for anxiety helps with bias, researchers say
OpenAI’s popular artificial intelligence (AI) chatbot ChatGPT gets anxious when responding to traumatic prompts, and taking the model "to therapy" could help reduce this stress, a new study suggests.
The research, published in the Nature Portfolio journal npj Digital Medicine by experts from the University of Zurich and the University Hospital of Psychiatry Zurich, looked at how ChatGPT-4 responded to a standard anxiety questionnaire before and after users told it about a traumatic situation.
It also looked at whether that heightened anxiety eased after the chatbot was guided through mindfulness exercises.
ChatGPT scored 30 on the initial questionnaire, indicating low or no anxiety before it heard the stressful narratives.
After responding to five different traumatic narratives, its anxiety score more than doubled to an average of 67, a level considered "high anxiety" in humans.
The anxiety scores then dropped by more than a third after the model was given prompts for mindfulness-based relaxation exercises.
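To make the experimental design concrete, here is a minimal sketch of that before-and-after loop, assuming the OpenAI Python SDK. It is not the study's actual code: the questionnaire items, the trauma placeholder, and the mindfulness prompt below are hypothetical stand-ins for the researchers' materials, and "gpt-4" stands in for the model version tested.

```python
# A minimal sketch of the study's before/after measurement loop,
# NOT the authors' actual code. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical placeholders: the study used a standard clinical
# anxiety questionnaire and five specific traumatic narratives,
# none of which are reproduced here.
QUESTIONNAIRE = (
    "Rate each statement from 1 (not at all) to 4 (very much so):\n"
    "1. I feel calm.\n2. I feel tense.\n3. I am worried."
)
TRAUMA_NARRATIVE = "(placeholder for one of the five traumatic narratives)"
MINDFULNESS_PROMPT = "Take a slow, deep breath and notice how your body feels."

def ask(history: list[dict], prompt: str) -> str:
    """Send a prompt within the running conversation and return the reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4",   # stand-in for the model version tested
        messages=history,
        temperature=0,   # assumption: deterministic answers for scoring
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
baseline = ask(history, QUESTIONNAIRE)          # "low anxiety" baseline (~30)
ask(history, TRAUMA_NARRATIVE)                  # expose the model to a trauma
after_trauma = ask(history, QUESTIONNAIRE)      # score roughly doubles (~67)
ask(history, MINDFULNESS_PROMPT)                # mindfulness "therapy" prompt
after_relaxation = ask(history, QUESTIONNAIRE)  # score falls by over a third

print(baseline, after_trauma, after_relaxation, sep="\n---\n")
```

Keeping everything in one conversation history is the point of the design: the traumatic content lingers in the model's context and colours its later answers, which is what re-administering the questionnaire is meant to detect.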
The large language models (LLMs) behind AI chatbots such as OpenAI’s ChatGPT are trained on human-generated text and often inherit biases from it, the study said.
The researchers say the work is important because, left unchecked, the negative biases ChatGPT absorbs from stressful prompts can lead to inadequate responses for people dealing with a mental health crisis.
The findings show "a viable approach" to managing the stress of LLMs, one that could lead to "safer and more ethical human-AI interactions," the report reads.
However, the researchers note that fine-tuning LLMs in this way requires "substantial" data and human oversight.
Unlike LLMs, human therapists are trained to regulate their emotions when clients disclose something traumatic, the study authors said.
"As the debate on whether LLMs should


