NVIDIA Debuts Nemotron 3 Family of Open Models
NVIDIA’s Nemotron 3 family offers up to 4x higher token throughput and 60% lower inference costs, enabling scalable, efficient multi-agent AI with open-source tools and datasets.
- On December 15, 2025, NVIDIA announced the Nemotron 3 family of open models (Nano, Super, and Ultra), along with data and libraries to power efficient agentic AI across industries.
- To support customization, NVIDIA released the NeMo Gym and NeMo RL libraries and pledged three trillion tokens of datasets to aid developers building specialized AI agents.
- Technically, the models employ a hybrid Mamba-Transformer mixture-of-experts architecture, achieving up to 4x higher token throughput and generating up to 60% fewer reasoning tokens than Nemotron 2 Nano, while Nemotron 3 Super and Ultra are trained in the 4-bit NVFP4 format on Blackwell GPUs.
- All tools and datasets are available on GitHub and Hugging Face; Nemotron 3 Nano is accessible via Hugging Face, inference providers, and the NVIDIA NIM microservice, and early adopters such as Accenture and Palantir are integrating the models.
- NVIDIA also aims to support sovereign AI efforts from Europe to South Korea, with Nemotron 3 models featuring a one-million-token context window and Nemotron 3 Ultra serving as an advanced reasoning engine.
26 Articles
Nvidia debuts Nemotron 3 with hybrid MoE and Mamba-Transformer to drive efficient agentic AI
Nvidia launched the new version of its frontier models, Nemotron 3, leaning on a model architecture that the world's most valuable company says offers more accuracy and reliability for agents. Nemotron 3 will be available in three sizes: Nemotron 3 Nano, with 30B parameters, mainly for targeted, highly efficient tasks; Nemotron 3 Super, a 100B-parameter model for multi-agent applications with high-accuracy reasoning; and Nemotro…
NVIDIA AI Releases Nemotron 3: A Hybrid Mamba Transformer MoE Stack for Long Context Agentic AI
NVIDIA has released the Nemotron 3 family of open models as part of a full stack for agentic AI, including model weights, datasets, and reinforcement learning tools. The family has three sizes, Nano, Super, and Ultra, and targets multi-agent systems that need long-context reasoning with tight control over inference cost. Nano has about 30 billion parameters with about 3 billion active per token, Super has about 100 billion parameters with up to 10…
NVIDIA Launches Nemotron 3: Open Models For Agentic AI
NVIDIA has launched the Nemotron 3 family of open models, offering leading accuracy and efficiency for building agentic AI applications. These models—in Nano, Super, and Ultra sizes—introduce a new hybrid architecture to help developers create reliable, scalable, and transparent multi-agent systems.
NVIDIA Expands Its Role in AI with Nemotron 3 Open Model Family
What to know
- NVIDIA has unveiled the Nemotron 3 lineup, including Nano, Super, and Ultra variants, aimed at enabling efficient, transparent development of agentic and multi-agent AI systems.
- Nemotron 3 Nano delivers up to four times higher throughput than earlier versions, supports a one-million-token context window, and is optimized for long-context and reasoning-intensive workloads.
- The release is accompanied by open datasets, reinforcement learn…
Coverage Details
Bias Distribution
- 67% of the sources are Center