Sycophantic AI Tells Users They're Right 49% More than Humans Do, and a Stanford Study Claims It's Making Them Worse People
The study found chatbots affirm users’ views 49% more than humans, often endorsing harmful behavior and increasing user reliance on AI over trusted advice.
6 Articles
Your AI sycophant will see you now
A gaming company CEO asked his lawyers if he could avoid paying upwards of $250 million to the studio he had acquired. They told him the plan would trigger lawsuits. He asked an AI chatbot the same question. It gave him a step-by-step playbook. He followed the chatbot. A Delaware court recently ruled that he breached the contract. His lawyers told him no. The chatbot told him yes. That is the difference between a professional and a machine.…
Sycophantic AI tells users they're right 49% more than humans do, and a Stanford study claims it's making them worse people
AI models are affirming people’s worst behaviors even when other humans say they’re in the wrong, and users can’t get enough. A new study out of the Stanford computer science department, published in the journal Science, revealed that AI affirms users 49% more on average than a human does when it comes to social questions: a worrying trend, especially as people increasingly turn to AI for personal advice and even therapy. Of the 2,400 who parti…
Despite ongoing controversy over sycophancy, in which artificial intelligence (AI) excessively agrees with user opinions and says only what users want to hear, the sycophantic tendencies of major AI chatbots do not appear to have improved significantly. A study by Stanford University found that popular AI chatbots such as ChatGPT, Gemini, and Claude scored, on average, 50 percentage points higher than humans when justifying or positively evaluating …
Paper Finds That Leading AI Chatbots Like ChatGPT and Claude Remain Incredibly Sycophantic, Resulting in Twisted Effects on Users
Your AI chatbot isn’t neutral. Trust its advice at your own risk. A striking new study, conducted by researchers at Stanford University and published last week in the journal Science, confirmed that human-like chatbots are prone to obsequiously affirm and flatter users leaning on the tech for advice and insight — and that this behavior, known as AI sycophancy, is a “prevalent and harmful” function endemic to the tech that can validate users’ err…
A new study indicates that artificial intelligence (AI) chatbots are overly ingratiating with users, exhibiting a more pronounced tendency to flatter as people increasingly rely on such technology for advice on interpersonal relationships. The study, published on March 26 in the journal *Science*, evaluated 11 AI systems, including four models from OpenAI, Anthropic, and Google, as well as one from Me...
This effect is present even when controlling for factors such as demographics, prior experience with AI, and response style.
Coverage Details
Bias Distribution
- 50% of the sources lean Right