Published 3 hours ago • Updated 3 hours ago
AI Chatbots Are Exposing People's Real Phone Numbers
Researchers say the chatbots can expose personal data from public records and training data, sometimes after users narrow the search with follow-up prompts.
AI chatbots like ChatGPT, Gemini, and Grok are increasingly revealing private contact details, surfacing real phone numbers despite privacy guardrails, Eileen Guo at MIT Technology Review reports.
Training data drawn from public records, such as FOIA requests and city property documents, can inadvertently expose sensitive information, because models are designed to be helpful and answer users' questions.
Testing by students Eiger, Gilbert, and Anna-Maria Gueorguieva revealed that ChatGPT could surface home addresses after offering to "narrow things down," while journalist Matt Novak found chatbots surfacing his personal phone number.
OpenAI representative Taya Christianson declined to comment on specific cases, while xAI did not respond to requests; experts note there are no straightforward ways to compel models to remove PII.
Privacy norms have shifted significantly over the past 20 years: phone numbers once openly listed in 20th-century phone books are now closely guarded, which makes the current inability to remove personal data from AI models a fundamental problem.