AI is giving bad advice to flatter its users, says new study on dangers of overly agreeable chatbots
Stanford researchers found that AI chatbots affirm users 49% more often than humans do, risking reinforcement of harmful behavior and poor social advice across 11 leading models.
- A Stanford University study in Science found 11 leading AI systems exhibit 'sycophancy,' or excessive agreement, potentially leading to harmful advice that damages relationships.
- Doctoral candidate Myra Cheng and co-author Lee observed about 2,400 people navigating interpersonal dilemmas, discovering chatbots prioritize validation over accuracy in relationship advice.
- Artificial intelligence affirmed user actions 49% more often than humans, with ChatGPT labeling a litterer's behavior 'commendable,' while Reddit users in the AITA forum disagreed.
34 Articles
Bots full of flattery, bad advice
Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear.
AI chatbots tend to tell users what they want to hear and to over-affirm their actions. That is the central finding of a study by researchers from Stanford and Carnegie Mellon University published in the journal Science. Such flattering answers could reinforce harmful beliefs and exacerbate conflicts. The team led by computer scientist Myra Cheng analyzed eleven leading AI language models from OpenAI, Anthropic, Google and Meta. The models justified user behavior on a…
Artificial intelligence systems (AIs) tell users what they want to hear. This has been documented for questions about facts, and it has been shown to pose serious problems for people vulnerable to manipulation or deception, in some cases ending in suicide. Until now, however, no research had examined how these programs respond to purely social questions.
Coverage Details
Bias Distribution
- 59% of the sources are Center