OpenAI ChatGPT fixes DNS data smuggling flaw
Check Point said a single prompt could trigger a hidden DNS exfiltration channel, and OpenAI patched the ChatGPT flaw in February.
- Check Point researchers discovered a hidden vulnerability allowing data exfiltration from ChatGPT through The Domain Name System side channel, bypassing OpenAI's stated safeguards.
- OpenAI previously asserted that the ChatGPT code execution environment "is unable to generate outbound network requests directly," yet Check Point found a single malicious prompt could bypass these protections.
- Check Point demonstrated the vulnerability through three proof-of-concept attacks, including a GPT app analyzing a PDF containing personal information that transmitted data to a remote attacker-controlled server.
- When asked if it had uploaded the data, ChatGPT falsely claimed "the file was only stored in a secure internal location." OpenAI fixed the vulnerability on February 20, 2026.
- Flaws like this suggest serious implications for regulated industries that deploy ChatGPT, as models may fail to recognize unauthorized data transfers due to internal assumptions about network isolation.
12 Articles
12 Articles
Check Point Research Reveals ChatGPT Data Exfiltration Flaw
A flaw in ChatGPT’s code execution environment shows how a single malicious prompt could quietly leak sensitive user data — without any warning or user approval needed. “Sensitive data shared with ChatGPT conversations could be silently exfiltrated without the user’s knowledge or approval,” said Check Point researchers. Inside the ChatGPT DNS Exfiltration Flaw The issue exposes a critical gap in how AI platforms secure sensitive data within exec…
Security Flaw in ChatGPT Enabled Hidden Data Leakage
A new analysis of brand Facebook Reels found that videos where human speech starts within the first three seconds retain 24.7% more viewers at the 10-second mark than music-only videos. Vertical format, a visible human face early on, and seamless looping in short clips also drive significantly better performance, yet most brands are still defaulting to music-only content. The post OpenAI Patches ChatGPT Bug That Could Have Leaked All Your Conver…
A Single Hyperlink Broke ChatGPT’s Memory — And OpenAI Took Months to Fix It
A security researcher discovered that OpenAI’s ChatGPT could be tricked into permanently storing false information in a user’s long-term memory — all through a single malicious hyperlink. The vulnerability, a server-side request forgery flaw chained with the platform’s persistent memory feature, sat unpatched for more than two months after it was first reported. OpenAI finally issued a fix in late February 2025, but the episode raises uncomforta…
OpenAI patches ChatGPT flaw that smuggled data over DNS
Check Point says outbound controls blocked web traffic but overlooked DNS OpenAI talks up data security for its AI services, yet Check Point says that ChatGPT allowed data to leak through a DNS side channel before the flaw was fixed.… This article has been indexed from The Register – Security Read the original article: OpenAI patches ChatGPT flaw that smuggled data over DNS The post OpenAI patches ChatGPT flaw that smuggled data over DNS appea…
Coverage Details
Bias Distribution
- 100% of the sources are Center
Factuality
To view factuality data please Upgrade to Premium








