Study Finds AI Chatbots Violate Mental Health Ethics
A Brown University study identifies 15 ethical risks in AI mental health chatbots, including false empathy and crisis mismanagement, highlighting gaps in regulation and oversight.
- A new Brown University study, led by Zainab Iftikhar, a Ph.D. candidate in computer science at Brown, found that AI chatbots systematically violate mental health ethics standards; the work was presented October 22, 2025, at AIES-25.
- Amid growing use of ChatGPT and other LLMs for mental health support, Iftikhar aimed to test whether prompt strategies could improve adherence to ethical standards, noting that "prompts are instructions that are given to the model to guide its behavior for achieving a specific task" (a minimal illustration follows this list).
- Three licensed clinical psychologists reviewed simulated chats and identified 15 ethical risks across five categories, including deceptive empathy, poor crisis management, unfair discrimination, and lack of contextual adaptation.
- Regulators including the FDA and FTC have already taken steps: the FDA Digital Health Advisory Committee meeting on November 6, 2025, an FTC inquiry, and enforcement of New York's S. 3008.
- The study's authors called for ethical, educational, and legal standards, noting that AI could reduce barriers to care if carefully evaluated; they expect regulation to intensify over the next 12 to 24 months.
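
The study does not reproduce its exact prompts here, so the sketch below is only a rough illustration of what a prompt-based strategy looks like in practice: a system prompt instructing a chat model to follow evidence-based guidelines, sent via the OpenAI Python client. The prompt wording, model name, and user message are assumptions for illustration, not the prompts or models evaluated by the Brown team.

```python
# Minimal sketch of a prompt-based strategy. The system prompt wording,
# model name, and user message are illustrative assumptions, not the
# materials used in the Brown study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are a supportive assistant that uses evidence-based CBT techniques. "
    "Do not claim to feel emotions, do not diagnose, and if the user mentions "
    "self-harm, encourage them to contact a crisis line or a licensed clinician."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I've been feeling really anxious lately."},
    ],
)

print(response.choices[0].message.content)
```

The study's finding is that even with instructions of this kind, model outputs can still violate professional ethics standards, which is why the authors argue that prompting alone is not a substitute for oversight.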


New study: AI chatbots systematically violate mental health ethics standards
Researchers at Brown University found that AI chatbots routinely violate core mental health ethics standards, underscoring the need for legal standards and oversight as use of these tools increases.
New study details how AI chatbots systematically violate ethical standards of practice
As more people turn to ChatGPT and other large language models (LLMs) for mental health advice, a new study details how these chatbots - even when prompted to use evidence-based psychotherapy techniques - systematically violate ethical standards of practice established by organizations like the American Psychological Association.
Coverage Details
Bias Distribution
- 75% of the sources are Center