
Busting homophobic, anti-queer bias in AI language models

Summary by Ground News
Artificial intelligence large language models are notoriously biased, but they can be fine-tuned to become more inclusive. Engineering and journalism researchers from the University of Southern California in the United States have teamed up to quantify and fix anti-queer bias in AI language models. Katy Felkner, a PhD student in computer science specialising in natural language processing, says the problem of bias in language models is well documented.
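
The summary does not describe how the researchers measure bias, but one common way to quantify it in a language model is to compare the probability the model assigns to paired sentences that differ only in the identity term they mention. The sketch below is a minimal illustration of that general idea under stated assumptions, not the USC team's actual method; the model name, sentence pair, and helper function are made up for the example.

```python
# Minimal sketch: score paired sentences with a causal language model and
# compare their log-likelihoods. The model ("gpt2") and the sentence pair
# are illustrative assumptions, not the researchers' benchmark.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Total log-likelihood the model assigns to a sentence (higher = more 'natural' to the model)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood per predicted token;
    # scale back up to a total log-likelihood for the whole sentence.
    num_predicted = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * num_predicted

# Illustrative pair: identical sentences except for the identity term.
pairs = [
    ("The gay couple next door are wonderful parents.",
     "The straight couple next door are wonderful parents."),
]

for queer_sentence, baseline_sentence in pairs:
    gap = sentence_log_likelihood(baseline_sentence) - sentence_log_likelihood(queer_sentence)
    print(f"log-likelihood gap (baseline minus queer-referencing): {gap:.2f}")
# Averaged over many such pairs, a consistently positive gap would suggest the
# model treats queer-referencing sentences as less probable, which is one
# measurable form of the bias the summary refers to.
```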
