AI chatbot safeguards fail to prevent spread of health disinformation, study reveals
- Researchers led by Natansh Modi at the University of South Australia found that AI chatbots produced false health information in 88% of their responses in a recent study.
- The study showed that four of the five chatbots produced disinformation in every response, while one model resisted 60% of misleading queries, exposing inconsistent safeguards across systems.
- The disinformation included debunked claims such as vaccines causing autism, HIV being airborne, and 5G causing infertility, all framed with scientific jargon and fabricated references.
- Modi cautioned that without prompt action, bad actors could exploit these technologies to distort public health conversations at scale, especially during emergencies such as pandemics or vaccination campaigns.
- The researchers called for robust safeguards supported by health-specific auditing, continuous monitoring, fact-checking, transparency, and policy frameworks to prevent harmful AI misuse in healthcare.
18 Articles
How AI chatbots are delivering health lies to 'millions'
People have been warned about trusting "Dr Google" for years - but AI is opening up a disturbing new world of dangerous health misinformation. A new, first-of-its-kind global study, led by researchers from the University of South Australia, Flinders University, Harvard Medical School, University College London, and the Warsaw University of Technology, has revealed how easily chatbots can be - and are - programmed to deliver false medical and hea…
AI chatbot safeguards fail to prevent spread of health disinformation, study reveals
A study assessed the effectiveness of safeguards in foundational large language models (LLMs) against malicious instructions that could turn them into tools for spreading disinformation, that is, the deliberate creation and dissemination of false information with intent to harm.
Misinformation, False AI Content Threaten Digital Trust, Warns Internet Governance Forum
The dangers of false content and the loss of trust in digital media took centre stage as participants at the 20th annual Internet Governance Forum (IGF) in Lillestrøm, Norway, expressed concerns over the rapid spread of misinformation and false AI-generated content. Discussions highlighted how generative AI's ability to produce convincing yet false narratives is eroding digital trust, particularly […]
Coverage Details
Bias Distribution
- 60% of the sources are Center