MiniMax-M1 comes close to Gemini 2.5 Pro's efficiency when handling large context windows


Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.

The Chinese AI startup MiniMax has released MiniMax-M1, a new open-source language model designed to outperform DeepSeek's R1. MiniMax-M1 is a reasoning-focused model with a context window of up to one million tokens and a "thinking" budget of up to 80,000 tokens. Its reinforcement learning training approach is notably efficient, making the model much leaner to train than other open-source options. It is available for free under the Apache 2.0 license.

In benchmark tests, MiniMax-M1 outperforms other open models such as DeepSeek-R1-0528 and Qwen3-235B-A22B in several categories. On the OpenAI MRCR test, which measures complex, multi-step reasoning across long texts, M1 comes close to the leading closed model, Gemini 2.5 Pro. While proprietary models like OpenAI o3 and Gemini 2.5 Pro still hold an edge in some areas, MiniMax-M1 has narrowed the gap significantly.

The model is available in two versions on Hugging Face.
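For readers who want to try the model, the sketch below shows one way to load and query it with the Hugging Face transformers library. The repo id "MiniMaxAI/MiniMax-M1-80k", the example prompt, and the generation settings are illustrative assumptions, not details from the article; check MiniMax's model cards on Hugging Face for the exact repo names and recommended settings.

```python
# Minimal sketch: loading MiniMax-M1 via Hugging Face transformers.
# The repo id below is an assumption based on the article's mention of
# two released versions; verify it against the MiniMax page on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M1-80k"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the repo ships custom architecture code
    device_map="auto",       # shard the weights across available GPUs
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Summarize the plot of Hamlet."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```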


