MiniMax M1 Model Claims Chinese LLM Crown From DeepSeek
- Shanghai-based AI firm MiniMax released its open-source M1 model on June 16, 2025, challenging leading Chinese and global competitors.
- MiniMax designed M1 to outperform DeepSeek's earlier R1 model, offering a larger 1-million-token context window and greater computational efficiency.
- M1 incorporates a Lightning Attention mechanism that enables it to handle input lengths eight times greater than DeepSeek R1 while utilizing only roughly one-third of its computational resources, with training costs estimated at $537,400.
- MiniMax claims that M1 ranks at the top of open-source models on benchmarks such as AIME 2024 and rivals proprietary models from OpenAI, Anthropic, and Google, with the source code openly available on GitHub.
- MiniMax aims to expand via a potential Hong Kong IPO as it leverages backing from Alibaba, Tencent, and IDG Capital amid intensifying competition with DeepSeek and Western AI labs.
11 Articles
Alibaba, Tencent-backed AI unicorn MiniMax eyes Hong Kong listing, report says
Chinese artificial intelligence (AI) start-up MiniMax, which counts Alibaba Group Holding and Tencent Holdings as investors, is moving towards an initial public offering (IPO) in Hong Kong as early as this year, according to media reports. Shanghai-based MiniMax, whose last funding round in March 2024 valued the company at US$2.5 billion, was in the “early stage” of preparations for the listing, according to a Wednesday report by Shanghai-based …
China’s MiniMax LLM costs about 200x less to train than OpenAI’s GPT-4, says company
It’s becoming a familiar pattern: every few months, an AI lab in China that most people in the U.S. have never heard of releases an AI model that upends conventional wisdom about the cost of training and running cutting-edge AI. In January, it was DeepSeek’s R1 that took the world by storm. Then in March, it was a startup called Butterfly Effect—technically based in Singapore but with most of its team in China—and its “agentic AI” model Manus tha…
China's 'little dragons' pose big challenge to US AI firms
China’s AI Challenger MiniMax: Open Models, Multi-modal Products, and IPO Drive
Pioneering Open Models: From Abab to M1
Shanghai-based AI startup MiniMax has quickly emerged as a model-driven company, pushing the boundaries of large language models (LLMs). In January 2024, it launched Abab6, China’s first Mixture-of-Experts (MoE) LLM, and followed up in April with Abab 6.5, a trillion-parameter model supporting 200,000-token context lengths. The Abab 6.5 series (including an optimized 6.5s variant) was shown to approach th…
Coverage Details
Bias Distribution
- 100% of the sources are Center