ChatGPT struggles to answer medical questions, new research finds
- ChatGPT, an AI chatbot developed by OpenAI, provided inaccurate or incomplete responses to the majority of medication-related queries, raising concerns about its use for medical advice.
- The study found that ChatGPT fabricates scientific references to support its responses and can provide inaccurate dose conversion rates, potentially leading to harmful consequences for patients.
- Researchers emphasized the importance of seeking medical advice from healthcare professionals rather than relying solely on AI chatbots for medication information.
17 Articles
ChatGPT struggles to answer medical questions, new research finds
By Giri Viswanathan, CNN (CNN) — ChatGPT might not be a cure-all for answers to medical questions, a new study suggests. Researchers at Long Island University posed 39 medication-related queries to the free version of the artificial intelligence chatbot, all of which were real questions from the university’s College of Pharmacy drug information service. The software’s answers were then compared with responses written and reviewed by trained phar…
ChatGPT declined to answer drug-related questions
Researchers at Long Island University (USA) found that the free version of the ChatGPT chatbot provides incorrect information about medications. The study was published on the organization's website. During the experiment, the researchers asked the program 45 questions on medical topics. The chatbot declined to directly answer 11 of them, gave inaccurate answers to 10, and answered 12 incorrectly. For each question, the researchers asked for a…
Coverage Details
Bias Distribution
- 54% of the sources lean Right