MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multimodal mathematical reasoning approaches struggle to deeply integrate visual and textual chain-of-thought (CoT) reasoning due to coarse-grained rectangular region partitioning, weak perception of mathematical symbols and structures by visual encoders, and reliance on external tools for image modification. Method: We propose the Mathematical Interleaved Token (MINT) mechanism, which dynamically selects arbitrary-shaped, fine-grained visual regions and interleaves them stepwise into textual CoT reasoning. We introduce a 54K-sample, step-level visually grounded mathematical dataset with precise region alignment; design a three-stage training strategy consisting of textual CoT warm-up, interleaved supervised fine-tuning, and interleaved reinforcement learning; and develop the MINT-CoT-7B model. Results: Our method achieves +34.08% on MathVista, +28.78% on GeoQA, and +23.2% on MMStar over the baseline model. Code and dataset are publicly released.

📝 Abstract
Chain-of-Thought (CoT) prompting has widely enhanced mathematical reasoning in Large Language Models (LLMs), but extending it to multimodal domains remains challenging. Existing works either apply similar textual reasoning to image inputs, or seek to interleave visual signals into mathematical CoT. However, they face three key limitations for math problem-solving: reliance on coarse-grained, box-shaped image regions; limited perception of math content by vision encoders; and dependence on external capabilities for visual modification. In this paper, we propose MINT-CoT, introducing Mathematical INterleaved Tokens for Chain-of-Thought visual reasoning. MINT-CoT adaptively interleaves relevant visual tokens into textual reasoning steps via an Interleave Token, which dynamically selects visual regions of any shape within math figures. To empower this capability, we construct the MINT-CoT dataset, containing 54K mathematical problems that align each reasoning step with visual regions at the token level, accompanied by a rigorous data generation pipeline. We further present a three-stage MINT-CoT training strategy that progressively combines text-only CoT SFT, interleaved CoT SFT, and interleaved CoT RL, deriving our MINT-CoT-7B model. Extensive experiments demonstrate the effectiveness of our method for visually interleaved reasoning in mathematical domains: MINT-CoT-7B outperforms the baseline model by +34.08% on MathVista, +28.78% on GeoQA, and +23.2% on MMStar. Our code and data are available at https://github.com/xinyan-cxy/MINT-CoT.
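The token-level selection described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `select_and_interleave`, the cosine-similarity scoring, and the fixed threshold are all assumptions; the key idea it demonstrates is that selecting individual visual tokens (rather than a bounding box) yields regions of arbitrary shape, which are then spliced into the reasoning sequence.

```python
import numpy as np

def select_and_interleave(text_tokens, interleave_hidden, visual_tokens,
                          threshold=0.5):
    """Pick visual tokens similar to the Interleave Token's hidden state,
    then splice them into the current text-token sequence.

    Selection is per visual token, so the chosen region on the image grid
    can have any shape, unlike a rectangular crop.
    """
    # Cosine similarity between the interleave query and every visual token.
    q = interleave_hidden / np.linalg.norm(interleave_hidden)
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    sims = v @ q                           # shape: (num_visual_tokens,)
    mask = sims > threshold                # arbitrary-shaped token selection
    selected = visual_tokens[mask]
    # Interleave: append selected visual tokens after the current text step.
    return np.concatenate([text_tokens, selected], axis=0), mask
```

In the actual model the selection would be learned end-to-end; the threshold here stands in for whatever decision rule the trained Interleave Token induces.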
Problem

Research questions and friction points this paper is trying to address.

Extending Chain-of-Thought reasoning to multimodal math domains
Overcoming limitations in visual perception for math problem-solving
Enhancing dynamic visual token interleaving in mathematical reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptively interleaves relevant visual tokens into textual reasoning steps
Dynamically selects visual regions of any shape within math figures
Three-stage training strategy (text-only SFT, interleaved SFT, interleaved RL) deriving MINT-CoT-7B
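The staged recipe above can be sketched as a simple schedule. The stage names and order mirror the abstract; the data structure, `run_schedule`, and the callback interface are illustrative assumptions, not the released training code.

```python
# Illustrative three-stage schedule mirroring the paper's recipe:
# 1) text-only CoT SFT, 2) interleaved CoT SFT, 3) interleaved CoT RL.
STAGES = [
    {"name": "text_cot_sft",        "objective": "sft", "interleaved": False},
    {"name": "interleaved_cot_sft", "objective": "sft", "interleaved": True},
    {"name": "interleaved_cot_rl",  "objective": "rl",  "interleaved": True},
]

def run_schedule(train_stage):
    """Run each stage in order; `train_stage` is a caller-supplied callback
    that performs the actual training for one stage."""
    for stage in STAGES:
        train_stage(stage)
```

The progression matters: the model first learns textual CoT, then learns where to insert visual tokens under supervision, and only then is refined with reinforcement learning on the interleaved behavior.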