Meta plans to automate many of its product risk assessments
- Meta plans to automate up to 90% of its product risk assessments using AI in 2025, speeding reviews so product updates ship more quickly.
- The privacy reviews themselves are mandated by a 2012 FTC agreement; the shift to automation reflects Meta's intent to streamline decision-making amid growing competition.
- Under the new system, product teams complete a questionnaire reviewed by AI, which provides instant decisions including identified risks and update requirements.
- While Meta emphasizes retaining human oversight for novel issues and says it has invested billions in privacy, some insiders warn that automation reduces scrutiny and increases the risk of harm.
- The changes point to faster innovation and less rigorous review at Meta, potentially allowing negative externalities to accumulate before problems surface, even as the company complies with evolving regulations such as the EU Digital Services Act.
41 Articles
If You Thought Facebook Was Toxic Already, Now It's Replacing Its Human Moderators with AI
Few companies in the history of capitalism have amassed as much wealth and influence as Meta. A global superpower in the information space, Meta — the parent company of Facebook, Instagram, WhatsApp, and Threads — has a market cap of $1.68 trillion at the time of writing, which for a rough sense of scale is more than the gross domestic product of Spain. In spite of its immense influence, none of its internal algorithms can be scrutinized by pub…
Meta reportedly replacing human risk assessors with AI
According to new internal documents reviewed by NPR, Meta is allegedly planning to replace human risk assessors with AI as the company edges closer to complete automation. Historically, Meta has relied on human analysts to evaluate the potential harms posed by new technologies across its platforms, including updates to the algorithm and safety features, as part of a process known as privacy and integrity reviews. But in the near future, these essenti…
Could there be privacy concerns? Could it be harmful to children? In the future, such questions will be investigated by artificial intelligence.
Coverage Details
Bias Distribution
- 73% of the sources lean Left