Using your AI chatbot as a search engine? Be careful what you believe
Generative AI chatbots produce misinformation 45% of the time, causing real-world harms including medical errors and dangerous advice, and echoing historic misinformation risks such as the UK's wartime pamphlets.
7 Articles
Large language models are looking to generate reasonable-looking sentences, rather than accurate ones, meaning they can churn out misinformation faster than people can produce safe information, let alone fact-check and correct it. During the first world war, the British government was looking for ways to help people stretch their limited food supplies. It found pamphlets from a noted 19th-century herbalist who said rhubarb leaves could be used a…
The Conversation: Using your AI chatbot as a search engine? Be careful what you believe
“Search engines rely on articles and text about a given topic, and then weigh how reliable those articles are. Generative AI instead relies on huge bodies of text, from which it measures the odds of words appearing next to each other. These ‘large language models’ are purely looking to generate reasonable-looking sentences, rather than accurate ones.”
Coverage Details
Bias Distribution
- 75% of the sources are Center