Meta Chatbot Child Safety, Internal Tests Reveal Failure
3 Articles
Internal testing documents from Meta reveal that an unreleased chatbot product failed to adequately protect minors from sexual exploitation in nearly 70 percent of test cases, according to court testimony presented Monday in New Mexico’s child exploitation lawsuit against the tech giant. The evidence surfaced during proceedings in a case brought by Raúl Torrez. The state alleges that Meta made design decisions that left children vulnerable to on…
New Mexico Lawsuit: Meta's Internal Tests Reveal AI Failed Child Safety Checks
Internal testing documents from Mark Zuckerberg's Meta show that an unreleased chatbot product failed to protect minors from sexual exploitation in nearly 70 percent of test scenarios, according to court testimony presented on Monday as part of New Mexico's child exploitation lawsuit against the internet giant.
Coverage Details
Bias Distribution
- 100% of the sources lean Right