MiniMax-M1 is a new open source model with 1 MILLION TOKEN context and new, hyper efficient reinforcement learning
4 Articles
MiniMax-M1 presents a flexible option for organizations looking to experiment with or scale up advanced AI capabilities while managing costs.
The training of MiniMax-M1 is said to have cost just over $500,000. The model's code is available on GitHub.
MiniMax-M1 comes close to Gemini 2.5 Pro efficiency when handling large context windows
The Chinese AI startup MiniMax has released MiniMax-M1, a new open-source language model designed to outperform DeepSeek's R1. The article MiniMax-M1 comes close to Gemini 2.5 Pro efficiency when handling large context windows appeared first on THE DECODER.
The Chinese startup MiniMax, known mainly for its AI video generator Hailuo, has now released MiniMax-M1, a large language model under the Apache 2.0 license with a context window of 1 million input tokens and up to 80,000 output tokens. The context window of a large language model (LLM) denotes the maximum number of tokens the model can process at once.
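To illustrate what those window limits mean in practice, here is a minimal sketch that estimates whether a prompt fits within MiniMax-M1's 1-million-token input window. The 4-characters-per-token ratio is a common rough heuristic for English text, not MiniMax's actual tokenizer; a real count would require the model's own tokenizer.

```python
# Heuristic token-budget check for MiniMax-M1's context window.
# Assumption: ~4 characters per token (rough English-text estimate only).

MAX_INPUT_TOKENS = 1_000_000   # MiniMax-M1 input window (per the article)
MAX_OUTPUT_TOKENS = 80_000     # MiniMax-M1 output limit (per the article)

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Estimate token count from character length (heuristic, not exact)."""
    return int(len(text) / chars_per_token)

def fits_in_window(prompt: str) -> bool:
    """True if the estimated prompt length fits the input window."""
    return estimate_tokens(prompt) <= MAX_INPUT_TOKENS

# A short prompt fits easily; ~5 million characters would not.
print(fits_in_window("Summarize this document."))  # True
```

At roughly 4 characters per token, a 1-million-token window corresponds to on the order of 4 million characters of English text, which is why such models are pitched at whole-codebase or multi-document tasks.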
Coverage Details
Bias Distribution
- 100% of the sources are Center