OpenAI warns people might become emotionally reliant on its ChatGPT voice mode
- OpenAI is concerned that users may become dependent on ChatGPT’s human-sounding voice mode for companionship.
- The report notes that some users express a sense of social connection with the AI, which could reduce their human interaction.
- Relationship experts warn of ethical responsibilities as some individuals form romantic connections with AI chatbots.
24 Articles
ChatGPT’s Voice Could Be Emotionally Addictive, Warns OpenAI
OpenAI is concerned ChatGPT’s voice mode might be emotionally addictive. The tech is due to debut later this year. The company behind ChatGPT and DALL-E is testing a new audio feature for its GPT-4o model that lets ChatGPT “talk” to users in real time. The downside, however, is that, according to OpenAI, ChatGPT’s voice mode has the potential to be emotionally …
OpenAI expressed concern that users may start to rely too much on ChatGPT for companionship because of its new human-sounding voice mode, which could lead to “emotional dependence.” On Thursday, a safety review conducted by the AI research lab brought this issue to light after the feature’s deployment to paid users. The review highlighted that the voice mode, which mirrors real human conversation patterns, could foster deeper em…
ChatGPT's voice is so convincing people 'may be emotionally reliant on it'
CHATGPT’S human voice mode is so realistic that people may become “emotionally reliant” on it, its creators warn. OpenAI, who are behind ChatGPT, have revealed concerns that users may forge an emotional dependency on the chatbot’s forthcoming voice mode. The GPT-4o voice mode is currently being analysed for safety before being rolled out. It allows, …
OpenAI claims GPT-4o users risk getting emotionally attached to its 'voice'
As OpenAI rolls out the advanced version of voice mode for its latest model, GPT-4o, the company says the feature could increase the risk of some users seeing artificial intelligence models as “human-like.”
Some people are getting emotionally attached to the voice mode in ChatGPT 4o
OpenAI released a safety analysis highlighting newly identified risks in GPT-4o’s voice mode. The document describes the company’s safety testing procedures and the measures being taken to minimise and manage possible risks associated with GPT-4o.
Coverage Details
Bias Distribution
- 40% of the sources are Center and 40% lean Right