OpenAI Sets Policy Against Legal, Medical Advice - Law360 Pulse
- On October 29th, OpenAI updated its usage policies to bar tailored legal or medical advice without the involvement of a licensed professional, emphasizing human review for high-stakes decisions.
- ChatGPT is increasingly used for health questions, with about 1 in 6 people relying on it monthly; some harms have been reported, including a psychiatric case described in the Annals of Internal Medicine.
- In practice, ChatGPT cannot diagnose users or provide in-depth personalized medical advice; it has suggested general remedies for a head cold and urged calling 911 for facial immobility.
- The policy change could constrain OpenAI's healthcare push since ChatGPT cannot replace clinicians for serious conditions requiring professional judgment.
- In social media posts Monday, Kalshi claimed that ChatGPT would stop giving health advice, sparking reactions; but Karan Singhal, OpenAI's head of health AI, said this is "not true" and that ChatGPT's behavior remains unchanged.
Insights by Ground AI
36 Articles
Contrary to earlier reports, OpenAI did not add a ban on providing legal and medical advice to its ChatGPT platform.
Montreal, Canada
Coverage Details
- Total News Sources: 36
- Leaning Left: 5 · Center: 3 · Leaning Right: 5
- Bias Distribution: Left 39% · Center 23% · Right 38%