OpenAI Is Scanning Users' ChatGPT Conversations and Reporting Content To Police
4 Articles
OpenAI Confirms It May Report Dangerous ChatGPT Conversations to Police
OpenAI has confirmed that ChatGPT conversations may be reviewed and, in extreme cases, referred to law enforcement if they contain violent threats or indications of imminent harm to others. The company clarified that cases of self-harm, while flagged internally for safety reasons, will not be reported to police, in order to protect user privacy.
Quick Summary: OpenAI says it will scan ChatGPT conversations for violent threats and, if necessary, report…
Since its launch, ChatGPT has often been presented as a virtual companion able to support users in a range of tasks, from writing and information retrieval to emotional support. But a recent announcement by OpenAI has unsettled that perception: the company has confirmed that it scans its users' conversations and reserves the right, in cases it deems extreme, to pass them on to law enforcement.
Futurism reports: Earlier this week, buried in the middle of a lengthy blog post addressing ChatGPT's propensity for severe mental health harms, OpenAI admitted that it's scanning users' conversations and reporting to police any interactions that a human reviewer deems sufficiently threatening. ...
OpenAI has explained in detail for the first time how ChatGPT usage is monitored and what measures are taken in the event of misuse, making clearer the extent to which conversations with the AI can actually be viewed and processed by humans. According to the company, specially trained review teams are brought in when interactions are flagged as anomalous or suspicious. […]