MiniMax unveils its own open source LLM with industry-leading 4M token context
6 Articles
MiniMax-Text-01 and MiniMax-VL-01 Released: Scalable Models with Lightning Attention, 456B Parameters, 4M Token Contexts, and State-of-the-Art Accuracy
Large Language Models (LLMs) and Vision-Language Models (VLMs) have transformed natural language understanding, multimodal integration, and complex reasoning tasks. Yet one critical limitation remains: current models cannot efficiently handle extremely large contexts. This challenge has prompted researchers to explore new methods and architectures to improve these models' scalability, efficiency, and performance. Existing models typically support toke…
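The Lightning Attention named in these headlines is a hardware-efficient variant of linear attention. As a rough illustration of why this family of methods scales to multi-million-token contexts, here is a minimal NumPy sketch of kernelized linear attention; the feature map `phi`, the function name `linear_attention`, and all shapes are illustrative assumptions, not MiniMax's published implementation.

```python
# Minimal sketch of kernelized linear attention (the family Lightning
# Attention belongs to). All names and shapes here are assumptions for
# illustration, not MiniMax's actual code.
import numpy as np

def phi(x):
    # ELU(x) + 1: a common positive feature map in linear-attention papers.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Compute attention in O(n * d^2) instead of O(n^2 * d).

    Standard attention materializes the n x n matrix softmax(Q K^T).
    Linear attention reorders the computation as phi(Q) @ (phi(K)^T @ V),
    so cost grows linearly with sequence length n.
    """
    Qf, Kf = phi(Q), phi(K)                  # (n, d) feature-mapped queries/keys
    kv = Kf.T @ V                            # (d, d) summary, independent of n
    z = Kf.sum(axis=0)                       # (d,)   normalizer state
    return (Qf @ kv) / (Qf @ z)[:, None]     # (n, d) attention output

# Toy usage: sequence length n dominates head dimension d,
# which is where the reordering pays off.
rng = np.random.default_rng(0)
n, d = 4096, 64
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (4096, 64)
```

Because the `(d, d)` summary can also be built incrementally token by token, the same reordering supports streaming over very long inputs, which is the property these articles attribute to the 4M-token context window.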
MiniMax Unveils Open-Source AI Models Featuring Lightning Attention for Ultra-Long Contexts
Shanghai-based AI startup MiniMax announced the release and open-sourcing of its next-generation MiniMax-01 series models. This release includes the foundational language model MiniMax-Text-01 and the visual multimodal model MiniMax-VL-01.
MiniMax introduces AI models with record context length for agents with 'long term memory'
MiniMax, a Chinese AI startup, has released its MiniMax-01 family of open-source models. The company says its MiniMax-Text-01 can handle contexts of up to 4 million tokens, double the capacity of its closest competitor.
MiniMax Releases Open-Source Model with Massive 4M Context Window
In a significant advancement for the AI ecosystem, MiniMax Research has introduced MiniMax-01, a new series of open-source models that includes MiniMax-Text-01, a language model, and MiniMax-VL-01, a visual multimodal model. These models not only rival top-tier AI systems in performance but also introduce a novel architecture capable of processing contexts of up to 4 million tokens, setting a new benchmark for languag…
Coverage Details
Bias Distribution
- 100% of the sources are Center