Ai2 Releases Olmo 3 Open Models, Rivaling Meta, DeepSeek and Others on Performance and Efficiency
Ai2's Olmo 3 open-source LLMs improve reasoning, coding, and compute efficiency with enterprise transparency and customization, supporting 65,536-token context windows, Ai2 said.
7 Articles
Ai2's Olmo 3 Pushes the Envelope for Open Source LLM Performance
The Allen Institute for AI (Ai2) today launched Olmo 3, the latest in its family of state-of-the-art, open-source large language models (LLMs). While the terms open source and LLM have a somewhat complex relationship, Ai2 has, together with Stanford’s Marin models and the Swiss Apertus models, led the charge in transparency about how models are trained, including the data and recipes its teams used. Olmo 3, Ai2 argues, outperforms many other ope…
Ai2 releases Olmo 3 open models, rivaling Meta, DeepSeek and others on performance and efficiency
The Allen Institute for AI (Ai2) unveiled Olmo 3, a new generation of open language models that it says outperforms rivals while using far less computing and offering more transparency.
Ai2’s Olmo 3 family challenges Qwen and Llama with efficient, open reasoning and customization
With its latest release, the Allen Institute for AI (Ai2) hopes to capitalize on growing demand for customized models and on enterprises seeking more transparency from AI vendors. Ai2 made the latest addition to its Olmo family of large language models available to organizations, continuing its focus on openness and customization. Olmo 3 has a longer context window, more reasoning traces, and better coding performance than its previous iteration. Thi…
Allen Institute for AI (AI2) Introduces Olmo 3: An Open Source 7B and 32B LLM Family Built on the Dolma 3 and Dolci Stack
Allen Institute for AI (AI2) is releasing Olmo 3 as a fully open model family that exposes the entire ‘model flow’, from raw data and code to intermediate checkpoints and deployment-ready variants. Olmo 3 is a dense transformer suite with 7B and 32B parameter models. The family includes Olmo 3-Base, Olmo 3-Think, Olmo 3-Instruct, and Olmo 3-RL Zero. Both 7B and 32B variants share a context length of 65,536 tokens and use the same staged training…
OLMo 3 debuts as the first fully open "thinking" model with step-by-step logic exposed to users
The Allen Institute for AI (Ai2) has launched OLMo 3, a new line of fully open AI models. The release includes the first open 32B "thinking" model, designed to make its reasoning process visible while running 2.5 times more efficiently than similar models. (Via THE DECODER.)
Coverage Details
Bias Distribution
- 75% of the sources are Center