
Before Launching GPT-5, OpenAI Is Adding Mental Health Guardrails To ChatGPT

UNITED STATES, AUG 6 – OpenAI introduces break reminders and avoids direct mental health advice in ChatGPT after reports of emotional distress, guided by input from over 90 medical experts worldwide.

  • In a blog post titled “What we’re optimizing ChatGPT for,” OpenAI detailed three changes, including no longer giving direct answers on questions involving emotional distress and adding gentle break reminders during long sessions.
  • Recent research by British NHS doctors found the AI can amplify paranoid or extreme content for susceptible users, echoing warnings raised months earlier.
  • OpenAI collaborated with more than 90 medical experts across 30 countries and formed an advisory group of mental health, youth development, and HCI researchers to refine safeguards.
  • OpenAI’s changes position ChatGPT as a tool for reflection, designed to prevent emotional dependency and avoid replacing professional help, maintaining trust during vulnerable moments.
  • OpenAI says further changes to ChatGPT are coming soon, raising broader questions about how users interact with intelligent systems as AI becomes more responsive and convincing.

12 Articles

Center

Specialists warn of the dangers of using ChatGPT as a therapist, insisting it is unable to understand human emotions.

Madrid, Spain

Bias Distribution

  • 75% of the sources lean Left


20minutos broke the news in Madrid, Spain on Tuesday, August 5, 2025.