Published 1 day ago • Updated 17 hours ago
ChatGPT now lets you name someone to check in if things get dark
The optional feature uses trained human reviewers before any alert is sent and does not share chat transcripts, OpenAI said.
OpenAI is rolling out a new feature called Trusted Contact for adult users, allowing individuals to nominate one person to be alerted if ChatGPT detects serious self-harm concerns.
The update responds to a documented safety risk: an estimated 1.3 million users, or 0.15 per cent of ChatGPT's user base, have reportedly shown signs of self-harm or suicide risk while interacting with the platform.
A specially trained team reviews flagged conversations to assess risks, aiming to complete the review in under one hour without sharing specific chat transcripts or conversation details.
Munmun De Choudhury, a professor of Interactive Computing at Georgia Tech, praised the feature as a step toward user empowerment, though OpenAI stresses that Trusted Contact does not replace crisis hotlines such as 988.
Developed with input from the American Psychological Association, the optional tool requires nominated contacts to be at least 18 years old and is designed to preserve user autonomy over their own safety support network.
Depressive thoughts and a hopeless mood: many people turn to AI with their problems. With this new safety feature, ChatGPT aims to recognize such self-harm risks in the future.