AI chatbot safeguards fail to prevent spread of health disinformation, study reveals
5 Articles
Is Chatbot Output Speech? A Recent Ruling Misses the Mark
When artificial intelligence (AI) chatbot characters communicate with you through words, responding to your input with comments, answers, and questions, are they engaging in “speech” within the meaning of the First Amendment? According to Senior US District Judge Anne Conway’s May decision in Garcia v. Character Technologies, the answer, perhaps surprisingly and certainly unfortunately, might be no. In rejecting First Amendment-based defe…
AI chatbot safeguards fail to prevent spread of health disinformation, study reveals
A study assessed the effectiveness of safeguards in foundational large language models (LLMs) against malicious instructions that could turn them into tools for spreading disinformation, that is, the deliberate creation and dissemination of false information with the intent to harm.
Building the agentic future: Lessons from a health AI agent
Why we need health AI agents
By 2035, over half the global population is expected to be overweight, costing the world economy an estimated $4 trillion. At the same time, AI is advancing rapidly. Many startups use LLMs such as GPT-4o, Claude 3, and Gemini 1.5 Pro as the backbone of AI agents that can reason, adapt, and act. Now imagine a billion people with access to an always-on AI agent that acts as their nutrition coach, providing pers…
AI Chatbot Protections Fall Short in Curbing Health Misinformation
The rapid advancement of artificial intelligence has paved the way for significant innovations across industries, including healthcare. However, beneficial as these developments may be, they also raise critical concerns about potential misuse. Recent research published in the Annals of Internal Medicine examines vulnerabilities in large language models (LLMs), particularly how these sophisticated systems can be manipulat…
Newswise Latest News: news and press releases in science, medicine, life, and business
Coverage Details
Bias Distribution
- 50% of the sources are Center, 50% of the sources lean Right