🤖 AI Summary
Current multimodal large language models (MLLMs) exhibit significant limitations in complex mathematical reasoning, which the authors attribute to the absence of knowledge-driven design and model-centric data space modeling. To address this, the paper proposes We-Math 2.0, a unified system comprising: the MathBook Knowledge System, a five-level hierarchy of 491 knowledge points and 1,819 fundamental principles; MathBook-Standard and MathBook-Pro, datasets built for broad conceptual coverage and for difficulty-graded training via a three-dimensional difficulty space; MathBook-RL, a two-stage reinforcement learning paradigm combining knowledge-oriented chain-of-thought cold-start fine-tuning with average-reward learning and dynamic data scheduling for progressive alignment across difficulty levels; and MathBookEval, a benchmark covering all 491 knowledge points. Evaluated on four widely used benchmarks and the newly introduced MathBookEval, the approach performs competitively with existing baselines and achieves strong results on MathBookEval, suggesting good generalization in multimodal mathematical reasoning. The core contribution is the unified integration of structured knowledge modeling, model-centric data space construction, and RL-based training in a single multimodal mathematical reasoning system.
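To make the hierarchical knowledge system concrete, here is a minimal sketch of how a five-level hierarchy of knowledge points with fundamental principles at the leaves could be represented. The class, field, and example node names are illustrative assumptions, not a schema from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical representation of the five-level knowledge hierarchy.
# Names and structure are assumptions for illustration only.

@dataclass
class KnowledgeNode:
    name: str
    level: int                                            # 1 (coarsest) .. 5 (finest)
    children: list["KnowledgeNode"] = field(default_factory=list)
    principles: list[str] = field(default_factory=list)   # fundamental principles at leaves

def count_leaves(node: KnowledgeNode) -> int:
    """Count leaf knowledge points (the paper reports 491 in total)."""
    if not node.children:
        return 1
    return sum(count_leaves(c) for c in node.children)

# Toy fragment of such a hierarchy, one path from root to a leaf principle.
root = KnowledgeNode("Mathematics", 1, children=[
    KnowledgeNode("Plane Geometry", 2, children=[
        KnowledgeNode("Triangles", 3, children=[
            KnowledgeNode("Triangle Congruence", 4, children=[
                KnowledgeNode(
                    "SAS Criterion", 5,
                    principles=["Two sides and the included angle determine a triangle."],
                ),
            ]),
        ]),
    ]),
])
print(count_leaves(root))  # -> 1
```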
📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities across various tasks but still struggle with complex mathematical reasoning. Existing research primarily focuses on dataset construction and method optimization, often overlooking two critical aspects: comprehensive knowledge-driven design and model-centric data space modeling. In this paper, we introduce We-Math 2.0, a unified system that integrates a structured mathematical knowledge system, model-centric data space modeling, and a reinforcement learning (RL)-based training paradigm to comprehensively enhance the mathematical reasoning abilities of MLLMs. The key contributions of We-Math 2.0 are fourfold: (1) MathBook Knowledge System: We construct a five-level hierarchical system encompassing 491 knowledge points and 1,819 fundamental principles. (2) MathBook-Standard & Pro: We develop MathBook-Standard, a dataset that ensures broad conceptual coverage and flexibility through dual expansion. Additionally, we define a three-dimensional difficulty space and generate seven progressive variants per problem to build MathBook-Pro, a challenging dataset for robust training. (3) MathBook-RL: We propose a two-stage RL framework comprising (i) Cold-Start Fine-tuning, which aligns the model with knowledge-oriented chain-of-thought reasoning, and (ii) Progressive Alignment RL, which leverages average-reward learning and dynamic data scheduling to achieve progressive alignment across difficulty levels. (4) MathBookEval: We introduce a comprehensive benchmark covering all 491 knowledge points with diverse reasoning-step distributions. Experimental results show that MathBook-RL performs competitively with existing baselines on four widely used benchmarks and achieves strong results on MathBookEval, suggesting promising generalization in mathematical reasoning.
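As a rough illustration of stage (ii), the sketch below shows one way average-reward learning and dynamic data scheduling could interact: rewards are averaged over rollouts on a seed problem and its progressive difficulty variants, and training advances to the next level once the average clears a threshold. The threshold, the promotion rule, and the toy "policy update" are all assumptions for illustration, not the paper's algorithm.

```python
import random

# Minimal toy sketch of Progressive Alignment RL, under assumed mechanics:
# average reward over difficulty variants drives a simple promotion schedule.

NUM_LEVELS = 8     # a seed problem plus its 7 progressive variants
PROMOTE_AT = 0.7   # assumed promotion threshold; not specified in the abstract
ROLLOUTS = 4       # rollouts per variant per round

def rollout_reward(skill: float, difficulty: int) -> float:
    """Toy stand-in for sampling a solution and scoring it (1.0 = correct)."""
    return 1.0 if random.random() < skill - 0.05 * difficulty else 0.0

def train(num_rounds: int = 200) -> int:
    skill, level = 0.5, 0
    for _ in range(num_rounds):
        # Average reward over all variants up to the current difficulty level.
        rewards = [rollout_reward(skill, d)
                   for d in range(level + 1)
                   for _ in range(ROLLOUTS)]
        avg_r = sum(rewards) / len(rewards)
        skill = min(1.0, skill + 0.01 * avg_r)   # toy "policy update"
        if avg_r >= PROMOTE_AT and level < NUM_LEVELS - 1:
            level += 1                           # dynamic data scheduling
    return level

if __name__ == "__main__":
    random.seed(0)
    print("final difficulty level reached:", train())
```

One plausible reading of "dynamic data scheduling," reflected above, is that the curriculum is driven by the model's own average reward rather than a fixed schedule; the paper itself should be consulted for the exact scheduling rule.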