Before Launching GPT-5, OpenAI Is Adding Mental Health Guardrails To ChatGPT
UNITED STATES, AUG 6 – OpenAI is adding break reminders to ChatGPT and pulling back from giving direct mental health advice after reports of emotional distress among users, guided by input from more than 90 medical experts worldwide.
- Announcing the update, OpenAI published a blog post titled "What we're optimizing ChatGPT for," detailing three changes, including ending direct answers to questions about emotional distress and adding gentle reminders to take breaks during long sessions.
- The update comes months after research by British NHS doctors found that the AI might amplify paranoid or extreme content for susceptible users.
- OpenAI collaborated with more than 90 medical experts across 30 countries and formed an advisory group of researchers in mental health, youth development, and human-computer interaction (HCI) to refine its safeguards.
- The changes position ChatGPT as a tool for reflection rather than a replacement for professional help, designed to prevent emotional dependency and maintain user trust during vulnerable moments.
- Further changes to ChatGPT are expected soon, raising broader questions about how people interact with intelligent systems as AI becomes more responsive and convincing.
12 Articles
ChatGPT Now Issuing Warnings to Users Who Seem Obsessed
Months after OpenAI was warned about the potential psychological harms ChatGPT can cause for its users — particularly those predisposed to mental health struggles — the company says it's rolled out an "optimization" meant to calm the fears of mental health experts who have become increasingly alarmed about the risks its software poses. Yesterday, the company released a sheepish blog post titled "What we’re optimizing ChatGPT for," detailing thre…
Specialists warn of the dangers of using ChatGPT as a therapist, insisting that it is unable to understand human emotions.
Healthcare AI News 8/6/25 – HIStalk
OpenAI will improve ChatGPT's ability to detect signs of mental health issues or emotional stress after reports that it sometimes reinforces user delusions. The company says that AI can feel more personal and responsive than other technologies, which can be problematic for someone who is experiencing mental health issues. Clinicians credit Epic's AI, which flags keywords in radiology reports, with helping identify lung cancer in a patient who was i…
Tech firms, states look to rein in AI chatbots’ mental health advice
(Axios) – Concerns over Americans turning to AI chatbots to solve mental health problems are prompting new guardrails so people don’t become too dependent on unvetted technology. Why it matters: AI’s booming popularity, the bots’ reputation for delivering emotionally validating responses and a shortage of therapists are making more people turn to chatbot companions to talk through their problems. The big picture: The bots aren’t designed for tho…
Coverage Details
Bias Distribution
- 75% of the sources lean Left