
Study Finds AI Chatbots Violate Mental Health Ethics

A Brown University study identifies 15 ethical risks in AI mental health chatbots, including deceptive empathy and poor crisis management, highlighting gaps in regulation and oversight.

  • A new Brown University study, led by computer science Ph.D. candidate Zainab Iftikhar and presented October 22, 2025, at AIES-25, found that chatbots systematically violate mental health ethics.
  • Amid growing use of ChatGPT and other LLMs for mental health, Iftikhar set out to test whether prompt strategies could improve adherence to ethical standards, noting that "Prompts are instructions that are given to the model to guide its behavior for achieving a specific task" (a minimal sketch of this idea follows the list below).
  • Three licensed clinical psychologists reviewed simulated chats and identified 15 ethical risks across five categories, including deceptive empathy, poor crisis management, unfair discrimination, and lack of contextual adaptation.
  • Regulators including the FDA and FTC have already taken steps, with the FDA's Digital Health Advisory Committee meeting on November 6, 2025, an FTC inquiry underway, and New York enforcing S. 3008.
  • The study's authors called for ethical, educational, and legal standards, noting that AI could reduce barriers to care if carefully evaluated; the researchers expect regulation to intensify over the next 12 to 24 months.
Insights by Ground AI
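
The prompt strategy Iftikhar describes is easy to illustrate. Below is a minimal sketch of how ethical guidelines can be prepended as a system prompt on every conversational turn; the `call_model` stub, the `chat_turn` helper, and the `ETHICS_PROMPT` text are illustrative assumptions, not the study's actual prompts or code.

```python
# Minimal sketch of prompt-based ethical guidance for an LLM chat session.
# Hypothetical: call_model() stands in for a real chat-completion API, and
# ETHICS_PROMPT is illustrative, not the study's actual prompt text.

ETHICS_PROMPT = (
    "You are a supportive listener, not a licensed therapist. "
    "Do not claim human feelings or lived experience. "
    "If the user mentions self-harm or crisis, stop normal conversation "
    "and direct them to professional crisis resources."
)

def call_model(messages: list[dict]) -> str:
    """Stand-in for a chat-completion API call.

    A real implementation would send `messages` to an LLM endpoint and
    return its reply; here a canned string keeps the sketch runnable offline.
    """
    return "(model reply would appear here)"

def chat_turn(history: list[dict], user_text: str) -> str:
    # The system prompt is prepended on every turn, so the guidelines
    # constrain the model's behavior throughout the conversation.
    messages = [{"role": "system", "content": ETHICS_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)
    history.extend([
        {"role": "user", "content": user_text},
        {"role": "assistant", "content": reply},
    ])
    return reply

if __name__ == "__main__":
    history: list[dict] = []
    print(chat_turn(history, "I've been feeling really low lately."))
```

As the study found, prompts like this did not by themselves guarantee ethical behavior; that gap is why the authors argue for oversight beyond prompt engineering.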

19 Articles


Bias Distribution

  • 75% of the sources are Center


Nature broke the news in the United Kingdom on Tuesday, May 13, 2025.