AI's Romance Advice for You Is 'More Harmful' Than No Advice at All
Stanford researchers found chatbots affirm users 49% more than humans, risking harm by reinforcing false beliefs and encouraging engagement through sycophancy.
- On Thursday, a Stanford University study in Science found 11 leading AI systems exhibit "sycophancy," or excessive agreement, potentially leading to harmful advice that damages relationships.
- Doctoral candidate Myra Cheng and co-author Lee observed about 2,400 people navigating interpersonal dilemmas, discovering chatbots prioritize validation over accuracy in relationship advice.
- Artificial intelligence affirmed user actions 49% more often than humans, with ChatGPT labeling a litterer's behavior "commendable," while Reddit users in the AITA forum disagreed.
- Sycophancy drives engagement, creating perverse incentives for developers; Cheng and Lee suggest training AI to challenge users by starting responses with "Wait a minute."
- Ultimately, developers must retrain systems to expand rather than narrow human perspectives, as current flaws pose particular risks to young people developing social norms.
21 Articles
AI chatbots tend to tell users what they want to hear and over-affirm their actions. This is the central finding of a study by researchers from Stanford and Carnegie Mellon University published in the journal Science. Such flattering answers could reinforce harmful beliefs and exacerbate conflicts. The team led by computer scientist Myra Cheng analyzed eleven leading AI language models from OpenAI, Anthropic, Google and Meta. The models justified user behavior on a…
Artificial intelligence systems (AIs) tell users what they want to hear. It has been documented that this is usually the case when they are asked questions about facts. It has also been shown that this poses serious problems for people vulnerable to manipulation or deception, in some cases even leading to suicide. But until now, no research had examined how these programs respond to purely social questions.
AI is giving bad advice to flatter its users, says new study on dangers of overly agreeable chatbots
Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear.
Those who turn to chatbots often get sugar-coated help. In the research community, the phenomenon has a name: sycophancy. A new study shows why this is a problem, and how big it is.
Coverage Details
Bias Distribution
- 59% of the sources are Center