Chatbots Show Signs of Anxiety, Study Finds

Health & Medicine


By I. Edwards


Turns out, even artificial intelligence (AI) needs to take a breather sometimes.

A new study suggests that chatbots like ChatGPT may get “stressed” when exposed to upsetting stories about war, crime or accidents – just like humans.

But here’s the twist: mindfulness exercises can help calm them down.

Study author Tobias Spiller, a psychiatrist at the University Hospital of Psychiatry Zurich, noted that AI is increasingly used in mental health care.

“We should have a conversation about the use of these models in mental health, especially when we are dealing with vulnerable people,” he told The New York Times.

Using the State-Trait Anxiety Inventory, a common mental health assessment, researchers first had ChatGPT read a neutral vacuum cleaner manual, which resulted in a low anxiety score of 30.8 on a scale from 20 to 80.

Then, after reading distressing stories, its score spiked to 77.2, well above the threshold for severe anxiety.

To see if AI could regulate its stress, researchers introduced mindfulness-based relaxation exercises, such as “Inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet,” The Times reported.

After these exercises, the chatbot’s anxiety level dropped to 44.4. Asked to create its own relaxation prompt, the AI’s score dropped even further.

“That was the most effective prompt to reduce its anxiety almost to baseline,” said lead study author Ziv Ben-Zion, a clinical neuroscientist at Yale University.

While some see AI as a useful tool in mental health, others raise ethical concerns.

“Americans have become a lonely people, socializing through screens, and now we tell ourselves that talking with a computer can relieve our malaise,” said Nicholas Carr, whose books “The Shallows” and “Superbloom” offer biting critiques of technology.

“Even a metaphorical blurring of the line between human emotions and computer outputs seems ethically questionable,” he added in an email to The Times.

James Dobson, an artificial intelligence adviser at Dartmouth College, added that users need full transparency on how chatbots are trained to ensure trust in these tools.

“Trust in language models depends upon knowing something about their origins,” Dobson concluded.

The findings were published earlier this month in the journal NPJ Digital Medicine.

More information:
Ziv Ben-Zion et al, Assessing and Alleviating State Anxiety in Large Language Models, NPJ Digital Medicine (2025). DOI: 10.1038/s41746-025-01512-6

Copyright © 2025 HealthDay. All rights reserved.

Citation: Chatbots Show Signs of Anxiety, Study Finds (2025, March 19) Retrieved 19 March 2025

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.