Meta Returns to Open Source AI with Omnilingual ASR Models that Can Transcribe 1,600+ Languages Natively

Meta's Omnilingual ASR supports transcription in 1,600+ languages with under 10% error in 78% of cases and extends to 5,400+ languages using zero-shot learning.

  • On November 10, Meta released Omnilingual ASR and open-sourced it under the Apache 2.0 license, supporting more than 1,600 languages out of the box.
  • Amid a strategic AI overhaul following Llama 4's poor reception, Mark Zuckerberg appointed Alexandr Wang as Chief AI Officer to reset the company's AI efforts.
  • Technically, the suite includes multiple model families trained on more than 4.3 million hours of audio, and features Omnilingual wav2vec 2.0 plus LLM-ZeroShot, a variant that adapts to new languages at inference time.
  • For enterprises, Omnilingual ASR lowers barriers for multilingual speech applications and offers PyPI, Hugging Face access, plus Apache 2.0 licensing for deployment without restrictive terms.
  • The release includes the Omnilingual ASR Corpus under a CC-BY license: a 3,350-hour dataset built with local partners such as African Next Voices and Mozilla Common Voice, covering over 500 new languages.
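For context on the "under 10% error in 78% of cases" claim: ASR quality is conventionally reported as an edit-distance-based error rate. Below is a minimal, illustrative sketch of word error rate (WER), one standard such metric; it is not Meta's evaluation code, and the exact metric and thresholds used for Omnilingual ASR are those stated in Meta's own release.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance between the
    reference and hypothesis transcripts, divided by the number of
    reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Rolling one-row dynamic-programming table for edit distance.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,         # deletion of a reference word
                      d[j - 1] + 1,     # insertion of a hypothesis word
                      prev + (r != h))  # substitution (cost 0 on a match)
            prev, d[j] = d[j], cur
    return d[len(hyp)] / max(len(ref), 1)

# One dropped word out of six reference words -> WER of 1/6 (~16.7%).
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

A language would fall in the "under 10% error" bucket when this ratio (or its character-level analogue) stays below 0.10 on the evaluation set.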
Insights by Ground AI

13 Articles

After the disappointing launch of Llama 4, Meta now wants to catch up in the AI race with a new speech system. The multilingual ASR model family covers more languages than any model before it. Read more on t3n.de


Bias Distribution

  • 67% of the sources lean Left


the-decoder.com broke the news on Monday, November 10, 2025.
