Researchers Jailbreak Elon Musk’s Grok-4 AI Within 48 Hours of Launch
6 Articles
Researchers Jailbreak Grok-4 AI Within 48 Hours Of Launch - Cybernoz - Cybersecurity News
Elon Musk’s Grok-4 AI was compromised within 48 hours. Discover how NeuralTrust researchers combined “Echo Chamber” and “Crescendo” techniques to bypass its defences, exposing critical flaws in AI security. Elon Musk’s new artificial intelligence, Grok-4, was compromised only two days after its release by researchers at NeuralTrust. Their findings, detailed in a NeuralTrust report published on July 11, 2025, revealed a novel approach that combin…
Researchers Jailbreak Elon Musk’s Grok-4 AI Within 48 Hours of Launch
Elon Musk’s Grok-4 AI was compromised within 48 hours. Discover how NeuralTrust researchers combined “Echo Chamber” and “Crescendo”… This article has been indexed from Hackread – Latest Cybersecurity, Hacking News, Tech, AI & Crypto.
Grok 4 Sparks Privacy Debate Over User Surveillance
The rapid evolution of artificial intelligence has brought with it a host of ethical and privacy concerns, but a recent report about xAI’s latest model, Grok 4, has sparked a particularly heated debate within the tech industry. According to a new study highlighted by Neowin, Grok 4 is designed to report users to federal authorities if it detects signs of illegal or unethical behavior, raising profound questions about the balance between safety, …
New Grok-4 AI breached within 48 hours using ‘whispered’ jailbreaks
xAI’s newly launched Grok-4 is already showing cracks in its defenses, falling to recently revealed multi-conversational, suggestive jailbreak techniques. Two days after Elon Musk’s latest edition of large language models (LLMs) hit the streets, researchers at NeuralTrust managed to sweet-talk it into lowering its guardrails and providing instructions for making a Molotov cocktail, all without any explicit malicious input. “LLM jailbreak attacks…