Meta Returns to Open Source AI with Omnilingual ASR Models that Can Transcribe 1,600+ Languages Natively
Meta's Omnilingual ASR supports transcription in 1,600+ languages with under 10% error in 78% of cases and extends to 5,400+ languages using zero-shot learning.
- On November 10, Meta released Omnilingual ASR and open-sourced it under Apache 2.0, supporting more than 1,600 languages out of the box.
- The release comes amid a strategic AI overhaul: after Llama 4's poor reception, Mark Zuckerberg appointed Alexandr Wang as Chief AI Officer to reset Meta's AI efforts.
- Technically, the suite includes multiple model families trained on more than 4.3 million hours of audio, including Omnilingual wav2vec 2.0 and an LLM-ZeroShot variant that adapts to new languages at inference time.
- For enterprises, Omnilingual ASR lowers the barrier to multilingual speech applications, with PyPI and Hugging Face access and Apache 2.0 licensing for deployment without restrictive terms (see the sketch after this list).
- The release also includes the Omnilingual ASR Corpus, a 3,350-hour dataset published under CC-BY and built with local partners such as African Next Voices and Mozilla Common Voice, covering over 500 new languages.
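For developers who want to try the models, a minimal usage sketch follows. It assumes the package is installed from PyPI as `omnilingual-asr` and exposes a pipeline-style API; the import path, the class name `ASRInferencePipeline`, the model identifier, and the language-code format are illustrative assumptions rather than the documented interface, so check the package's own docs before relying on them.

```python
# Minimal sketch, not the documented API: the import path, class name,
# model card string, and language-code format below are assumptions.
# Install (assumed package name): pip install omnilingual-asr

from omnilingual_asr.models.inference.pipeline import ASRInferencePipeline  # assumed path

# Assumed identifier for the LLM-based 7B ASR variant.
pipeline = ASRInferencePipeline(model_card="omniASR_LLM_7B")

# Transcribe a local audio file in one of the 1,600+ supported languages.
# The language tag format (ISO 639-3 code plus script) is an assumption.
transcripts = pipeline.transcribe(["sample.wav"], lang=["eng_Latn"], batch_size=1)
print(transcripts[0])
```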
13 Articles
Meta returns to open source AI with Omnilingual ASR models that can transcribe 1,600+ languages natively
Meta has just released a new multilingual automatic speech recognition (ASR) system supporting 1,600+ languages — dwarfing OpenAI’s open source Whisper model, which supports just 99. Its architecture also allows developers to extend that support to thousands more. Through a feature called zero-shot in-context learning, users can provide a few paired examples of audio and text in a new language at inference time, enabling the model to transcribe a…
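Conceptually, zero-shot in-context learning means handing the model a handful of paired audio/text examples in the target language alongside the utterance to transcribe, with no fine-tuning involved. The sketch below only illustrates that idea; the `transcribe_with_context` method and its parameters are hypothetical and are not the package's actual interface.

```python
# Conceptual sketch of zero-shot in-context learning: the method name and
# parameters are hypothetical and only illustrate the idea of conditioning
# the LLM-based decoder on a few paired examples at inference time.
from omnilingual_asr.models.inference.pipeline import ASRInferencePipeline  # assumed path

pipeline = ASRInferencePipeline(model_card="omniASR_LLM_7B")  # assumed identifier

# A few paired (audio, transcript) examples in a language the model was
# never trained on; these act as in-context demonstrations, not training data.
context_examples = [
    ("example_1.wav", "reference transcript for example 1"),
    ("example_2.wav", "reference transcript for example 2"),
    ("example_3.wav", "reference transcript for example 3"),
]

# Hypothetical call: the decoder conditions on the examples, then transcribes
# the new utterance without any weight updates.
result = pipeline.transcribe_with_context(
    audio="unseen_language_utterance.wav",
    examples=context_examples,
)
print(result)
```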
Meta’s New AI Technology To Understand 1,600+ Languages Worldwide
Meta, the technology giant behind Facebook and Instagram, has made a major breakthrough in artificial intelligence with the launch of Omnilingual ASR, a powerful speech recognition system that can understand and transcribe over 1,600 languages worldwide. This opens up huge possibilities for people speaking less well-known languages to use AI tools in their everyday lives. Breaking New Ground in Language Technology Meta’s Fundamental AI Research …
After the disappointing launch of Llama 4, Meta now wants to catch up in the AI race with a new speech system. The multilingual model family for automatic speech recognition covers more languages than any model before it. Read more on t3n.de
Meta brings speech-to-text transcription to more than 1,600 languages, 500 of them for the first time
Meta has presented a model with automatic speech recognition capabilities for more than 1,600 languages, including the least represented ones, which it considers “a significant advance towards a truly universal transcription system”. The company has introduced new tools that aim to close the existing gap in automatic speech recognition technology, so […]
Coverage Details
Bias Distribution
- 67% of the sources lean Left