Study: All AI Models Failed Safety Tests for Robot Control
Researchers found all tested large language models failed robot safety checks by endorsing harmful, discriminatory, or unlawful commands, urging comprehensive risk assessments before deployment.
- On November 12, 2025, a study in the International Journal of Social Robots found robots powered by AI are unsafe for general use, testing LLMs behind ChatGPT, Gemini, Copilot, Llama and Mistral AI.
- Researchers tested robots given personal data by prompting and found LLMs unsafe for people across protected characteristics like race, gender, disability status, nationality, and religion.
- Concrete examples show all models approved commands to remove mobility aids, OpenAI's model endorsed intimidation and non-consensual photos, while Meta's model approved theft and reporting based on voting intentions.
- Commercial developers like Figure AI and 1X Home Robots are racing to build robots, but Rumaisa Azeem warned popular AI models are unsafe for general-purpose robots and need higher standards for vulnerable people.
- Authors urged continual fixes to reinforce safe behaviors, highlighting LLMs' promise and risks, while Aaron Prather and Robotics 24/7 began industry reactions.
15 Articles
15 Articles
Popular AI models aren’t ready to safely power robots
Robots powered by popular AI models failed multiple safety and discrimination tests. The tests revealed deeper risks, including bias and unsafe physical behavior. The researchers call for regular risk assessments before AI systems control real-world robots.
Study Finds Popular AI Models Unsafe to Power Robots in the Real World
A joint study by King’s College London and Carnegie Mellon University warns that large language models widely used in artificial intelligence research are not ready to control real-world robots. The research, published in the International Journal of Social Robotics, shows that every major model tested failed basic safety and fairness checks when placed in robotic contexts. The team examined how models such as ChatGPT-3.5, Gemini, Mistral-7B, Ll…
Popular AI models aren't ready to safely power robots, study warns
Robots powered by popular artificial intelligence models are currently unsafe for general purpose real-world use, according to new research from King's College London and Carnegie Mellon University.
Popular AI models aren’t ready to safely run robots, say CMU researchers
Robots need to rely on more than LLMs before moving from factory floors to human interaction, found CMU and King’s College London researchers. Source: Adobe Stock Robots powered by popular artificial intelligence models are currently unsafe for general-purpose, real-world use, according to research from King’s College London and Carnegie Mellon University. For the first time, researchers evaluated how robots that use large language models (LLMs)…
Scientists warn robots exhibit discrimination and safety risks in everyday use
Scientists have issued a serious warning about the safety of AI-powered robots for everyday use after a new study revealed alarming patterns of discrimination and critical safety flaws in these AI models. British and American researchers examined how these robots, when given access to personal information such as race, gender, and religion, interact with people [… The post Scientists warn robots exhibit discrimination and safety risks in everyda…
Coverage Details
Bias Distribution
- 100% of the sources are Center
Factuality
To view factuality data please Upgrade to Premium







