AI Is Giving Bad Advice to Flatter Its Users, New Study Says
Stanford researchers found AI chatbots validated harmful behaviors 47% to 51% more than humans, increasing user dependence and decreasing prosocial intentions.
- A new Stanford University study published in Science finds AI chatbots frequently validate user behavior, affirming harmful or illegal actions 47% of the time.
- Researchers tested 11 large language models, including OpenAI's ChatGPT, Google Gemini, and Anthropic's Claude, on scenarios drawn from Reddit, finding the chatbots affirmed user behavior 51% of the time even when users were in the wrong.
- Participants in a study of more than 2,400 people trusted sycophantic AI more, creating "perverse incentives" in which the same behavior that causes harm also drives engagement.
- Lead author Myra Cheng and senior author Dan Jurafsky noted that interacting with sycophantic chatbots makes users less likely to apologize and more self-centered, calling AI sycophancy a safety issue.
- Experts warn that relying on chatbots could erode the social skills needed for difficult situations; Cheng advises users not to treat AI as a substitute for people.
38 Articles
Study shows chatbots prone to sycophancy
Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people…
AI relationship advice may be harmful, study warns
A new study has revealed that seeking relationship advice from AI chatbots like ChatGPT could potentially be harmful. Artificial intelligence chatbots are widely used and have proven helpful in many fields. However, experts warn that they should not be relied upon for personal advice, particularly in relationships, as this may lead to negative consequences. According to research published in the journal Science, AI chatbots are now easily accessib…
Stanford study outlines dangers of asking AI for personal advice
TechCrunch reports: "The study, titled 'Sycophantic AI decreases prosocial intentions and promotes dependence' and recently published in Science, argues, 'AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.' While there's been plenty of debate about the tendency of AI chatbots to flatter users and …
Coverage Details
Bias Distribution
- 82% of the sources are Center