
Your AI Chats May Not Be Private: Microsoft Study Finds Conversation Topics Can Be Inferred from Network Data

A new Microsoft study shows that conversations with AI assistants such as ChatGPT, Gemini, and Claude may not be as private as users assume. The research uncovers a vulnerability, dubbed Whisper Leak, that lets network observers infer what people are discussing with these systems even when the chats are encrypted. The weakness lies not in the encryption itself but in how data travels between the user and the language model…
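The article describes a side channel rather than a break in encryption: because assistants stream responses incrementally, an eavesdropper sees a sequence of encrypted packet sizes and timings that can correlate with the content being generated. As a minimal sketch of the general idea (this is a hypothetical illustration, not Microsoft's actual Whisper Leak method), an observer could fingerprint the packet-size distribution of known prompts and match new encrypted traffic against those fingerprints:

```python
# Illustrative sketch only: an on-path observer never sees plaintext, but
# streamed LLM responses arrive as many small encrypted records, so the
# sequence of record sizes can act as a fingerprint of the topic.
from collections import Counter

def size_profile(packet_sizes, bucket=16):
    """Summarize an encrypted stream as a histogram of bucketed sizes."""
    return Counter(size // bucket for size in packet_sizes)

def overlap(p, q):
    """Multiset overlap between two size histograms (higher = more alike)."""
    return sum(min(p[k], q[k]) for k in p.keys() & q.keys())

def classify(observed_sizes, labeled_profiles):
    """Guess the topic whose known traffic fingerprint best matches."""
    obs = size_profile(observed_sizes)
    return max(labeled_profiles, key=lambda t: overlap(obs, labeled_profiles[t]))

# Hypothetical fingerprints captured in advance for known prompt topics.
fingerprints = {
    "medical": size_profile([120, 118, 130, 125, 122, 119]),
    "finance": size_profile([300, 310, 295, 305, 290, 315]),
}

guess = classify([298, 302, 307, 300], fingerprints)  # matches "finance"
```

Real attacks of this family are far more sophisticated (using timing, direction, and learned classifiers), and mitigations typically involve padding or batching streamed tokens so record sizes no longer track content.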


Discord has become one of the most widely used platforms among teenagers in Spain and Latin America, offering chats, servers, and communities open to any interest. The ease of creating groups and adding bots has fueled its growth, but it has also opened the door to invisible risks. Artificial-intelligence bots, programmed to interact naturally, can earn a minor's trust and turn everyday conversation into data extraction without anyone noticing. The…



digitalinformationworld.com broke the news on Tuesday, November 11, 2025.