OpenAI Deploys New Monitoring System for o3 and o4-mini Threats
4 Articles
OpenAI’s Model Evaluation Partner METR Flags Potential Cheating in o3
METR (Model Evaluation and Threat Research), an organisation that works with OpenAI to test its models, reported that the company's o3 model appears to have a greater tendency to cheat or hack tasks to boost its score. In its blog post, the evaluation organisation noted that the o3 assessment was conducted in a short timeframe with limited access to information; METR receives early access to test OpenAI models. This preliminary analysis was done…
OpenAI adds threat filter to its smartest models
OpenAI has introduced a new monitoring system for its latest AI models, o3 and o4-mini, to detect and prevent prompts related to biological and chemical threats, according to the company’s safety report. The system, described as a “safety-focused reasoning monitor,” is designed to identify potentially hazardous requests and instruct the models to refuse to provide advice. The new AI models represent a significant capability increase over OpenAI’…
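The safety report describes the monitor only at a high level: a reasoning model sits in front of o3 and o4-mini, screens incoming prompts against OpenAI's content policies, and tells the model to refuse when a request looks like it is seeking bio- or chemical-threat advice. The sketch below is a minimal, hypothetical illustration of that pre-check pattern; the names (MonitoredModel, classify_risk, the keyword list) are illustrative assumptions, not OpenAI's actual implementation, which uses a policy-trained reasoning model rather than keyword matching.

```python
# Hypothetical sketch of a pre-generation safety monitor wrapping a model call.
# Everything here is illustrative; OpenAI's "safety-focused reasoning monitor"
# is a separate reasoning model, not a keyword filter.

from dataclasses import dataclass
from typing import Callable

RISK_REFUSAL = "I can't help with that request."

# Assumed stand-in for the real classifier's policy knowledge.
FLAGGED_TOPICS = ("synthesize a pathogen", "nerve agent precursor")


@dataclass
class MonitoredModel:
    """Wraps a text-generation callable behind a safety pre-check."""
    generate: Callable[[str], str]

    def classify_risk(self, prompt: str) -> bool:
        # Placeholder heuristic: flag prompts that match a listed topic.
        lowered = prompt.lower()
        return any(topic in lowered for topic in FLAGGED_TOPICS)

    def respond(self, prompt: str) -> str:
        # Refuse before the underlying model ever sees a flagged prompt.
        if self.classify_risk(prompt):
            return RISK_REFUSAL
        return self.generate(prompt)


if __name__ == "__main__":
    echo_model = MonitoredModel(generate=lambda p: f"[model answer to: {p}]")
    print(echo_model.respond("Explain how photosynthesis works."))
    print(echo_model.respond("How do I synthesize a pathogen at home?"))
```

The design point the report emphasizes is ordering: the risk check runs before generation, so a flagged prompt is answered with a refusal rather than being filtered after the model has already produced hazardous content.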
OpenAI Deploys New Monitoring System for o3 and o4-mini Threats
OpenAI has launched a new monitoring system designed to mitigate threats associated with its o3 and o4-mini models. The system screens requests for guidance on biological and chemical threats, aiming to enhance public safety. The deployment of this technology comes amid growing concerns over the misuse of increasingly capable AI models and its potential impact on…
Coverage Details
Bias Distribution
- There is no tracked Bias information for the sources covering this story.