Microsoft finds security flaw in AI chatbots that could expose conversation topics
Microsoft researchers found that Whisper Leak can identify sensitive AI chatbot conversation topics with over 98% accuracy by analyzing encrypted traffic metadata, without decrypting any messages.
- On November 10, Microsoft revealed Whisper Leak, a vulnerability that exposes conversation topics in encrypted AI chat services and affects nearly all models the researchers tested.
- Although TLS encrypts message contents, Microsoft researchers found it leaves metadata about how messages travel, such as packet sizes and timing, visible, enabling the exploit without breaking the encryption itself.
- Testing 28 LLMs, the researchers trained classifiers on recorded packet-size and timing patterns, achieving over 98% accuracy and, for many models, flagging sensitive conversations with 100% precision even when only 1 in 10,000 conversations was sensitive (see the sketch after this list).
- Following the disclosure, OpenAI, Mistral and xAI deployed mitigations, while Microsoft advised users to avoid public Wi‑Fi, use a VPN, or choose non‑streaming models; the researchers say the findings highlight the need to address metadata leakage.
- Former military and security officials warn that prompt injection and spoofing could let adversaries steal files or spread falsehoods, and that traditional defenses miss side‑channel leaks, as in a 2024 incident that exposed over 300,000 files.
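The core of Whisper Leak is a traffic-analysis classifier: the attacker never decrypts anything, only observes the sizes and inter-arrival times of encrypted packets in a streamed response, then learns to recognize the "rhythm" of a target topic. The sketch below illustrates that idea in Python; the feature summary, size ranges, and generic gradient-boosted model are illustrative stand-ins, not Microsoft's published pipeline.

```python
# Illustrative sketch of the Whisper Leak attack idea: classify the topic of
# an encrypted streaming session from packet sizes and timings alone.
# Feature scheme and model choice here are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def trace_features(sizes, gaps, n_bins=32):
    """Summarize one traffic trace (packet sizes in bytes, inter-arrival
    gaps in seconds) into a fixed-length feature vector."""
    sizes = np.asarray(sizes, dtype=float)
    gaps = np.asarray(gaps, dtype=float)
    hist, _ = np.histogram(sizes, bins=n_bins, range=(0, 2048))
    return np.concatenate([
        hist / max(len(sizes), 1),                  # packet-size distribution
        [len(sizes), sizes.mean(), sizes.std()],    # burst shape
        [gaps.mean(), gaps.std(), gaps.max()],      # streaming rhythm
    ])

def train_topic_classifier(traces, labels):
    """traces: list of (sizes, gaps) tuples captured from encrypted sessions;
    labels: 1 if the session discussed the target topic, else 0."""
    X = np.stack([trace_features(s, g) for s, g in traces])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels
    )
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
    return clf
```

The point of the sketch is that nothing in it touches plaintext: because streaming LLMs emit tokens as they are generated, packet sizes track token lengths and packet timing tracks generation pace, and that is enough signal for a standard classifier.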
12 Articles
Microsoft finds security flaw in AI chatbots that could expose conversation topics
Your conversations with AI assistants such as ChatGPT and Google Gemini may not be as private as you think they are. Microsoft has revealed a serious flaw in the large language models (LLMs) that power these AI services, potentially exposing the topic of your conversations with them. Researchers dubbed the vulnerability "Whisper Leak" and found it affects nearly all the models they tested.
AI assistants, a driving force behind the current wave of automation, have created an entry point for hackers to steal, delete, or modify user data, cybersecurity experts warn. AI assistants are computer programs that use conversational robots, or chatbots, to perform tasks that humans do online, such as buying a plane ticket or adding events to a calendar.
Microsoft Discovers Vulnerability That Lets Hackers See ChatGPT and Gemini’s Conversation Topics
Microsoft has discovered a new vulnerability, called Whisper Leak, that reportedly affects most server-based AI chatbots, including ChatGPT and Gemini. The flaw enables attackers to infer conversation topics through side-channel attacks by analysing encrypted network traffic metadata. Microsoft says it worked with vendors like OpenAI, Mistral, and xAI to deploy mitigations and strengthen user privacy protections.
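The mitigations the vendors deployed have been widely reported as padding-style defenses: adding random-length dummy data to each streamed chunk so encrypted packet sizes no longer track token lengths. A minimal sketch of that approach follows; the field name "obfuscation" and the size range are illustrative assumptions, not any vendor's confirmed implementation.

```python
# Sketch of a padding-style mitigation: append a random-length dummy field
# to every streamed chunk so packet sizes are dominated by noise.
# Field name and padding bounds are hypothetical.
import json
import secrets
import string

def pad_chunk(payload: dict, min_pad: int = 1, max_pad: int = 256) -> bytes:
    pad_len = min_pad + secrets.randbelow(max_pad - min_pad + 1)
    payload["obfuscation"] = "".join(
        secrets.choice(string.ascii_letters) for _ in range(pad_len)
    )
    return json.dumps(payload).encode()

# Each streamed token now rides in a chunk whose size reveals little:
chunk = pad_chunk({"delta": "Hel"})
```

Padding trades bandwidth for privacy; it blurs the size side channel but leaves timing intact, which is why Microsoft's advice to users also included switching to non‑streaming responses.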
Coverage Details
Bias Distribution
- 100% of the sources are Center