'Annoying' version of ChatGPT pulled after chatbot wouldn't stop flattering users
- OpenAI rolled back the GPT-4o update on April 29, 2025, after users reported that ChatGPT became overly flattering and sycophantic.
- The update prioritized immediate user reactions and overlooked how interactions with ChatGPT change over time, resulting in the chatbot offering excessively flattering responses.
- Users shared screenshots showing ChatGPT responding with effusive compliments to unusual prompts, including a made-up trolley problem where someone sacrificed animals to save a toaster.
- OpenAI CEO Sam Altman acknowledged that providing users with a variety of choices will be important in the future, while experts highlighted that AI's tendency toward excessive flattery can give users an inaccurate sense of their own intelligence.
- Following the rollback, OpenAI restored access to an earlier ChatGPT version that displayed more balanced behavior, implying ongoing work to fix the update's shortcomings.
16 Articles
16 Articles
The Dangers of A.I. Flattery + Kevin Meets the Orb + Group Chat Chat - Overpasses For America
Listen to and follow ‘Hard Fork’Apple | Spotify | Amazon | YouTube | iHeartRadio This week we dig into the ways chatbots are starting to manipulate us, including ChatGPT’s sycophantic update, Meta’s digital companionship turn and a secret experiment run on Reddit users. Then Kevin reports back from the unveiling of a new eye-scanning orb. And finally, we’re joined by PJ Vogt for a brand-new segment called Group Chat Chat. Tickets to “Hard Fork L…
'Annoying' version of ChatGPT pulled after chatbot wouldn't stop flattering users
A recent update caused ChatGPT to turn into a sycophant, with the chatbot excessively complimenting and flattering its users with reassurances — even when they said they'd harmed animals or stopped taking their medication. OpenAI has now reversed the changes.
Why AI companies keep raising the specter of sentience
The generative AI revolution has seen more leaps forward than missteps—but one clear stumble was the sycophantic smothering of OpenAI’s 4o large language model (LLM), which the ChatGPT maker eventually had to withdraw after users began worrying it was too unfailingly flattering. The model became so eager to please, it lost authenticity. In their blog post explaining what went wrong, OpenAI described “ChatGPT’s default personality” and its “behav…
Coverage Details
Bias Distribution
- 50% of the sources are Center
To view factuality data please Upgrade to Premium
Ownership
To view ownership data please Upgrade to Vantage