Studies: LLMs Sway Political Opinions More Than One-Way Messaging
Studies show AI chatbots can shift voter preferences by up to 15 points, outperforming video and text ads, but often spread inaccurate claims.
- On December 4, 2025, a pair of studies published in Nature and Science showed, through controlled chatbot experiments, that dialogues with large language models can shift people's political attitudes.
- Model training and prompting made a crucial difference: chatbots trained on persuasive conversations and instructed to use facts reproduced partisan patterns and produced asymmetric inaccuracies, psychologist Thomas Costello noted.
- Researchers found concrete effect sizes: U.S. participants shifted their ratings by two to four points, and Canadian and Polish participants by about 10 points, with 36%–42% of the effect persisting after a month.
- The immediate implication is a trade-off between persuasiveness and accuracy: the study authors found about 19% of chatbot claims were predominantly inaccurate, with right-leaning bots making more false claims, and warned that political campaigns may soon deploy persuasive but less truthful surrogates.
- After tests with nearly 77,000 UK participants and 19 LLMs run by the UK AI Security Institute, Oxford, LSE, MIT, Stanford, and Carnegie Mellon, experts now ask how to detect ideologically weighted models.
64 Articles
Their patient and factual responses are convincing even when it comes to voting, according to an experiment conducted in three countries
AI Can Help With Viewpoint Diversity Challenges (opinion)
AI may help higher ed with its viewpoint diversity challenges. Viewpoint diversity and artificial intelligence are two of the most widely discussed challenges facing higher education today. What if we could address these two simultaneously, employing AI to create productive intellectual friction across different political and philosophical positions?
Research finds AI chatbots sway political opinions but flood conversations with inaccurate claims
AI persuasion works by flooding conversations with factual-sounding claims. This strategy causes a significant trade-off, where increased persuasiveness directly reduces accuracy. A single AI conversation can durably shift a person’s political views by a large margin. Small, open-source models can now match the persuasive power of advanced corporate AI systems. This creates a built-in engine […]
A new study by the UK AI Security Institute shows that chatbots can significantly influence users' political views, but the most persuasive models are also the ones that generate the most erroneous information.
Coverage Details
Bias Distribution
- 39% of the sources lean Left