See every side of every news story
Published loading...Updated

Security Researchers Jailbreak GPT-5 within 24 Hours

Summary
Security researchers have successfully jailbroken OpenAI's newly released GPT-5 model with 24 hours of its launch on August 8, 2025, exposing critical vulnerabilities that raise serious concerns about its readiness for enterprise deployment.
DisclaimerThis story is only covered by news sources that have yet to be evaluated by the independent media monitoring agencies we use to assess the quality and reliability of news outlets on our platform. Learn more here.

16 Articles

Two independent security companies have been able to easily hack the new OpenAI GPT-5, revealing critical vulnerabilities that make the model "nearly unusable" for companies. NeuralTrust researchers and SPLX Red Team members have both demonstrated how multi-round storytelling attacks can bypass the filters at the level of prompts (generative instructions), thus exposing systemic weaknesses in the...

The fumigated GPT-5 has revealed not only hilarious blunders but also dangerous flaws.

Think freely.Subscribe and get full access to Ground NewsSubscriptions start at $9.99/yearSubscribe

Bias Distribution

  • There is no tracked Bias information for the sources covering this story.

Factuality 

To view factuality data please Upgrade to Premium

Ownership

To view ownership data please Upgrade to Vantage

securityweek.com broke the news in on Friday, August 8, 2025.
Sources are mostly out of (0)