AI Agents Are Getting Better. Their Safety Disclosures Aren't
MIT's 2025 AI Agent Index found 25 of 30 AI agents lack safety testing details and 23 provide no third-party data, raising concerns about transparency and risk management.
9 Articles
Most AI Bots Lack Published Formal Safety and Evaluation Documents, Study Finds
Story: Fred Lewsey. Reviewed by Ayaz Khan. An investigation into 30 top AI agents finds just four have published formal safety and evaluation documents relating to the actual bots. Many of us now use AI chatbots to plan meals and write emails, AI-enhanced web browsers to book travel and buy tickets, and workplace AI to generate invoices and performance reports. However, a new study of the “AI agent ecosystem” suggests that as these AI bots rapid…
AI agents are fast, loose and out of control, MIT study finds
The vast majority of agentic AI systems disclose nothing about what safety testing, if any, has been conducted, and many systems have no documented way to shut down a rogue bot, a study by MIT and collaborators found.
Most AI bots lack basic safety disclosures, study finds
Many people use AI chatbots to plan meals and write emails, AI-enhanced web browsers to book travel and buy tickets, and workplace AI to generate invoices and performance reports. However, a new study of the "AI agent ecosystem" suggests that as these AI bots rapidly become part of everyday life, basic safety disclosure is "dangerously lagging."
AI agents are becoming increasingly popular, yet standards for their safety and behavior are almost entirely lacking, as the 2025 AI Agent Index shows.
Coverage Details
Bias Distribution
- 100% of the sources are Center