DeepSeek Releases First Open AI Model with Gold-Level Scores at Maths Olympiad
DeepSeek-Math-V2, an open-source model, achieved gold-level scores on IMO 2025 and scored 118/120 on Putnam 2024, advancing self-verifiable mathematical reasoning AI.
10 Articles
DeepseekMath-V2 is Deepseek's latest attempt to pop the US AI bubble
Chinese startup DeepSeek reports that its new DeepseekMath-V2 model has reached gold-medal status at the International Mathematical Olympiad, keeping the company in tight competition with Western AI labs. The article DeepseekMath-V2 is Deepseek's latest attempt to pop the US AI bubble appeared first on THE DECODER.
The Chinese company DeepSeek, which earlier this year put its artificial intelligence competitors in a tight spot with its affordable method of training language models, has launched DeepSeek-Math-V2, an open-source reasoning model that achieved gold-medal performance at the 2025 International Mathematical Olympiad (IMO). The fact that this AI performs at the level of other cutting-edge mathematical reasoning systems while being open-source n…
DeepSeek’s Math-V2 AI Model Self-Checks And Solves Olympiad-Level Problems
Artificial intelligence built for math has entered a new phase. DeepSeek’s latest open-weight system, DeepSeek-Math-V2, doesn’t just solve Olympiad-level problems—it checks its own work, corrects its reasoning, and generates theorems that can be independently verified. This shift toward self-verifiable mathematical reasoning could redefine how researchers build, test, and trust AI systems. It also pushes China’s rapidly expanding open-source AI …
DeepSeek AI Releases DeepSeekMath-V2: The Open Weights Maths Model That Scored 118/120 on Putnam 2024
How can an AI system prove complex olympiad-level math problems in clear natural language while also checking that its own reasoning is actually correct? DeepSeek AI has released DeepSeekMath-V2, an open-weights large language model optimized for natural-language theorem proving with self-verification. The model is built on DeepSeek-V3.2-Exp-Base, runs as a 685B-parameter mixture of experts, and is available on Hugging Face under an Apac…
Coverage Details
Bias Distribution
- 50% of the sources lean Left, 50% are Center