Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks
- Thousands of American schools use AI-powered surveillance to monitor student accounts and school-issued devices around the clock, citing safety concerns amid a youth mental health crisis and threats of violence.
- A joint investigation found that almost 3,500 sensitive student documents were accessible without any security protections, pointing to major privacy lapses.
- Advocates warn that AI surveillance poses unique risks to LGBTQ+ students, with instances of students being outed to school officials or families.
- Many parents are unaware that surveillance software is in use, as schools frequently fail to clearly disclose the technology.
89 Articles
Schools are turning to AI-powered surveillance technology to monitor students on school-issued devices to help keep them safe. But that is raising questions about privacy and security.
One student asked a search engine, “Why does my boyfriend hit me?” Another threatened suicide in an email to an unrequited love. A gay teen opened up in an online diary about struggles with homophobic parents, writing they just wanted to be themselves.
Coverage Details
Bias Distribution
- 43% of the sources lean Left