ChatGPT struggles to answer medical questions, new research finds
- ChatGPT, an AI chatbot developed by OpenAI, provided inaccurate or incomplete responses to the majority of medication-related queries, raising concerns about its use for medical advice.
- The study found that ChatGPT fabricates scientific references to support its responses and can provide inaccurate dose conversion rates, potentially leading to harmful consequences for patients.
- Researchers emphasized the importance of seeking medical advice from healthcare professionals rather than relying solely on AI chatbots for medication information.
17 Articles
ChatGPT struggles to answer medical questions, new research finds
By Giri Viswanathan, CNN (CNN) — ChatGPT might not be a cure-all for answers to medical questions, a new study suggests. Researchers at Long Island University posed 39 medication-related queries to the free version of the artificial intelligence chatbot, all of which were real questions from the university’s College of Pharmacy drug information service. The software’s answers were then compared with responses written and reviewed by trained phar…
ChatGPT Does a Bad Job of Answering People’s Medication Questions, Study Finds
Researchers recently tested ChatGPT’s ability to answer patient questions about medication, finding that the AI model gave wrong or incomplete answers about 75% of the time. Providers should be aware that the model does not always give sound medical advice, given that many of their patients may be turning to ChatGPT with health-related questions.
Coverage Details
Bias Distribution
- 50% of the sources lean Right