🤖 AI Summary
This work addresses the limited reasoning capability of large language models (LLMs) on mathematical, programming, and general reasoning tasks by proposing MiMo, a reasoning-optimized 7B model. During pretraining, we introduce data augmentation, a three-stage hybrid data-curation strategy, and a multi-token prediction (MTP) objective to accelerate the acquisition of logical reasoning. In post-training, we design a difficulty-aware reinforcement learning framework over verifiable mathematical and programming problems, employing Proximal Policy Optimization (PPO) with dynamic resampling and test-case-difficulty-driven code-reward shaping to mitigate sparse rewards. Together, these stages form a unified recipe that integrates pretraining and post-training to jointly enhance reasoning performance. Experiments show that MiMo-7B-Base surpasses 32B baselines across multiple reasoning benchmarks, and that MiMo-7B-RL consistently outperforms OpenAI o1-mini on mathematical reasoning, code generation, and general reasoning tasks.
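The test-difficulty-driven code reward mentioned above can be sketched as partial credit weighted by per-test difficulty. The names below (`TestCase`, `shaped_reward`) and the specific weighting rule are illustrative assumptions, not the paper's exact scheme:

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """One unit test for a generated program (hypothetical structure)."""
    passed: bool
    difficulty: float  # assumed weight, e.g. 1 - historical pass rate


def shaped_reward(tests: list[TestCase]) -> float:
    """Dense reward in [0, 1]: difficulty-weighted fraction of tests passed,
    instead of a sparse all-or-nothing 0/1 signal for the whole problem."""
    total = sum(t.difficulty for t in tests)
    if total == 0:
        return 0.0
    return sum(t.difficulty for t in tests if t.passed) / total


# A solution that passes only the two easy tests still earns partial credit:
tests = [TestCase(True, 0.2), TestCase(True, 0.3), TestCase(False, 0.9)]
reward = shaped_reward(tests)  # ≈ 0.357 (0.5 / 1.4)
```

The design intuition is that harder tests contribute more weight, so a policy that only solves trivial cases still receives a graded learning signal rather than a flat zero.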
📝 Abstract
We present MiMo-7B, a large language model born for reasoning tasks, with optimization across both pre-training and post-training stages. During pre-training, we enhance the data preprocessing pipeline and employ a three-stage data mixing strategy to strengthen the base model's reasoning potential. MiMo-7B-Base is pre-trained on 25 trillion tokens, with an additional Multi-Token Prediction (MTP) objective for enhanced performance and accelerated inference. During post-training, we curate a dataset of 130K verifiable mathematics and programming problems for reinforcement learning, integrating a test-difficulty-driven code-reward scheme to alleviate sparse-reward issues and employing strategic data resampling to stabilize training. Extensive evaluations show that MiMo-7B-Base possesses exceptional reasoning potential, outperforming even much larger 32B models. The final RL-tuned model, MiMo-7B-RL, achieves superior performance on mathematics, code, and general reasoning tasks, surpassing OpenAI o1-mini. The model checkpoints are available at https://github.com/xiaomimimo/MiMo.
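The Multi-Token Prediction objective trains extra heads to predict several future tokens at each position, not just the next one. The toy loss below is a minimal sketch under assumed shapes (one token-probability table per head); it is not MiMo's actual architecture:

```python
import math


def mtp_loss(head_probs: list[dict[int, float]], tokens: list[int], pos: int) -> float:
    """Average negative log-likelihood across MTP heads at position `pos`:
    head k predicts the token at pos + 1 + k, so head 0 is ordinary
    next-token prediction and later heads look further ahead."""
    losses = []
    for k, probs in enumerate(head_probs):
        target = pos + 1 + k
        if target < len(tokens):  # skip heads that would look past the sequence end
            losses.append(-math.log(probs[tokens[target]]))
    return sum(losses) / len(losses)


# Two heads over a 2-token vocabulary: head 0 predicts tokens[1], head 1 predicts tokens[2].
head_probs = [{0: 0.9, 1: 0.1}, {0: 0.5, 1: 0.5}]
tokens = [0, 0, 1]
loss = mtp_loss(head_probs, tokens, pos=0)  # mean of -log(0.9) and -log(0.5)
```

Beyond the richer training signal, the same extra heads can be reused to draft several tokens at once for speculative decoding, which is one common way an MTP objective translates into faster inference.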