ChatGPT can now alert a trusted contact
7 Articles
ChatGPT Self-Harm Risk Detection Alerts: "Your Acquaintance Is at Risk" Trusted Contact Registration Feature. OpenAI announced a feature that detects when a ChatGPT user may be at risk of self-harm and sends alerts to people the user has designated in advance. OpenAI has introduced the Trusted Contact registration feature to ChatGPT.
OpenAI announced Thursday that it has begun rolling out “Trusted Contact”, an optional safety feature in ChatGPT that lets adults designate a trusted contact to receive alerts when the system detects conversations related to self-harm that indicate serious risk. As the company explained, the tool will allow people over 18 years of age, or 19 in South Korea, to add a family member, friend or caregiver from the settin…
OpenAI has introduced a new ChatGPT feature that allows users to choose a trusted person who could be alerted if the AI believes they may be facing a serious safety risk. The system lets adult users select a friend, relative or caregiver who may receive a notification if ChatGPT detects conversations suggesting the person could be in crisis or at risk of harming themselves. The new option is…
OpenAI Adds Trusted Contact Feature For Conversations Involving Self-Harm Concerns
OpenAI announced on Thursday a new ChatGPT feature called Trusted Contact that allows users to designate another person to receive alerts if conversations indicate possible self-harm concerns. The feature lets adult ChatGPT users add a trusted third party to their account, such as a friend or family member. If OpenAI’s systems detect conversations that may involve self-harm risk, ChatGPT will encourage the user to contact that person directly. T…
Mundo, 9 May 2026 (ATB Media). ChatGPT seeks to strengthen emotional support for its users with a system that will notify close contacts in potentially serious cases. OpenAI announced a new safety feature for ChatGPT that will allow adult users to designate a “trusted contact” to receive alerts when the system detects possible risk signals related to self-harm or severe emotional crises. The tool, called Trusted Contact, seeks to pr…
Courtesy: Canva. Artificial intelligence no longer aims only to answer questions or help with daily tasks; it also seeks to intervene at critical moments. OpenAI confirmed the launch of “Trusted Contact”, a new safety feature within ChatGPT that can warn a trusted person if the system detects serious signs of self-harm during a conversation. The tool, designed for adult users, represents one of the company’s most sensitive and ambitio…
Coverage Details
Bias Distribution
- 67% of the sources lean Right