OpenAI Atlas Browser Tripped up by Malformed URLs
8 Articles
8 Articles
ChatGPT Atlas address bar a new avenue for prompt injection, researchers say
A prompt disguised as a URL could be copied and pasted by an unsuspecting user. Introduction to Malware Binary Triage (IMBT) Course Looking to level up your skills? Get 10% off using coupon code: MWNEWS10 for any flav…
ChatGPT’s Atlas Browser Jailbroken to Hide Malicious Prompts Inside URLs
Security researchers at NeuralTrust have uncovered a critical vulnerability in OpenAI’s Atlas browser that allows attackers to bypass safety measures by disguising malicious instructions as innocent-looking web addresses. The flaw exploits how the browser’s omnibox interprets user input, potentially enabling harmful actions without proper security checks. The Omnibox Vulnerability Explained Atlas features an omnibox that […] The post ChatGPT’s A…
In a blog article published on October 24, 2025, researchers from the Cybersecurity company NeuralTrust revealed a method for a malicious actor to bypass the IA Assistant's security mechanisms embedded in the new ChatGPT Atlas browser. Since the launch of the new
OpenAI introduced the AI browser Atlas last week. Security experts discovered several critical vulnerabilities a few hours later. Prompt injection attacks can cause the browser to turn against its own users. (Read more)
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
The newly released OpenAI ChatGPT Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit. "The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent," NeuralTrust said in a report
Coverage Details
Bias Distribution
- 100% of the sources are Center
Factuality
To view factuality data please Upgrade to Premium




