
AI Chatbots Give Misleading Medical Advice 50% of the Time, Study Finds

Researchers found 50% of answers from five popular chatbots were problematic, and Grok had the highest rate at 58%, according to BMJ Open.

  • A new BMJ Open study found half of medical information provided by five major chatbots was "problematic," with nearly 20 per cent deemed highly problematic. Researchers evaluated ChatGPT, Gemini, Meta, Grok, and DeepSeek across five health categories.
  • Chatbots often "hallucinate," generating incorrect medical responses due to biased training data and a tendency to prioritize user beliefs over truth. These systems lack clinical judgment and are not licensed to provide professional health advice.
  • Grok returned the most problematic responses at 58 per cent, followed by ChatGPT at 52 per cent and Meta at 50 per cent. Previous work found only 32 per cent of citations from specific models were accurate.
  • Experts warned that expanding chatbot use in medicine requires "diligent oversight" to prevent misinformation amplification. Researchers emphasized the need for public education and professional training to ensure chatbots support rather than erode public health.
  • Despite these limitations, more than 200 million people weekly ask ChatGPT health questions. The study authors concluded developers must reevaluate how these tools are deployed in public-facing health communication.
Insights by Ground AI

25 Articles

Lean Left

Five widely used artificial intelligence chatbots often produced problematic answers to health and medical questions, according to a recent study. The researchers, in a study published yesterday in BMJ Open, tested the chatbots Gemini, DeepSeek, Meta AI, ChatGPT and Grok with 50 questions in five categories prone to misinformation. These included cancer, vaccines, stem cells, nutrition and athletic performance. The questions, asked in February 2…

Lean Right

Chatbots based on artificial intelligence (AI) are providing problematic medical advice to users about half of the time, according to a new study, highlighting the health risks of a technology that is becoming increasingly ubiquitous. Puberty is arriving earlier and earlier in girls: is this normal? What are the causes? Doctors explain… Five tips to help your memory work at its best… Researchers in the United States, Canada and the Un…

Brazil

Bias Distribution

  • 40% of the sources are Center


Bloomberg broke the news in the United States on Tuesday, April 14, 2026.
