
AI Chatbots Are Exposing People's Real Phone Numbers

Researchers say the chatbots can expose personal data from public records and training data, sometimes after users narrow the search with follow-up prompts.

  • AI chatbots like ChatGPT, Gemini, and Grok are increasingly revealing private contact details, surfacing real phone numbers despite privacy guardrails, Eileen Guo at MIT Technology Review reports.
  • Training data drawn from public records, such as FOIA requests and city property documents, can inadvertently expose sensitive information, since models are optimized to be helpful and answer users' questions.
  • Testing by students Eiger, Gilbert, and Anna-Maria Gueorguieva revealed that ChatGPT could surface home addresses after suggesting it could "narrow things down," while journalist Matt Novak also found chatbots identifying his personal number.
  • OpenAI representative Taya Christianson declined to comment on specific cases, while xAI did not respond to requests; experts note there are no straightforward ways to compel models to remove PII.
  • Privacy norms have shifted significantly over the past 20 years, from the openly published phone books of the 20th century to phone numbers treated as closely guarded secrets today, which makes the current inability to remove personal data from AI models a fundamental problem.
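As the experts quoted above note, there is no straightforward way to make a trained model "forget" PII, so mitigation typically happens before training, by scrubbing obvious identifiers from the text. The sketch below is a minimal, hypothetical illustration of that idea: a regex that redacts North American-style phone numbers from a string. It is not any vendor's actual pipeline, and real systems use far broader PII-detection tooling; nothing here can retroactively remove data already baked into model weights.

```python
import re

# Matches North American-style phone numbers, e.g. "(555) 123-4567",
# "+1 555.987.6543", "555-123-4567". Lookarounds prevent matching
# digit runs inside longer numbers.
PHONE_RE = re.compile(
    r"(?<!\d)(?:\+1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}(?!\d)"
)

def redact_phone_numbers(text: str, placeholder: str = "[PHONE]") -> str:
    """Replace phone-number-like substrings with a placeholder."""
    return PHONE_RE.sub(placeholder, text)

print(redact_phone_numbers("Call me at (555) 123-4567 or +1 555.987.6543."))
# → Call me at [PHONE] or [PHONE].
```

Even with scrubbing like this at training time, numbers embedded in scanned documents, images, or unusual formats slip through, which is one reason the researchers found real contact details surfacing in responses.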
Insights by Ground AI

10 Articles


Bias Distribution

  • 67% of the sources lean Left



The Independent broke the news in London, United Kingdom on Sunday, May 10, 2026.
