
Using your AI chatbot as a search engine? Be careful what you believe

Generative AI chatbots produce misinformation 45% of the time, leading to real-world harms including medical errors and dangerous advice, echoing the historical risks of misleading UK wartime pamphlets.

Kevin Veale, Te Kunenga ki Pūrehuroa – Massey University

During the first world war, the British government was looking for ways to help people stretch their limited food supplies. It found pamphlets from a noted 19th-century herbalist who said rhubarb leaves could be used as a vegetable along with the stalks.

Image: Zulfugar Karimov / Unsplash

The government duly printed its own pamphlets advising people to eat rhubarb leaves as a salad rather t…

7 Articles


Because of the way generative AI works, there is no real way to prevent false information being presented as truth – or to correct it permanently.


Bias Distribution: 75% of the sources are Center.


The Conversation broke the news on Sunday, March 22, 2026.