Training Multimodal Large Reasoning Models Needs Better Thoughts: A Three-Stage Framework for Long Chain-of-Thought Synthesis and Selection

📅 2025-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-quality long chain-of-thought (CoT) data for multimodal large language models (MLLMs) is scarce; existing approaches suffer from shallow reasoning depth, cross-modal inconsistency, and rigid generation pipelines. Method: We propose SynSelect, a three-stage framework: (1) collaborative generation of diverse CoTs using heterogeneous multimodal reasoning models; (2) joint instance- and batch-level filtering to dynamically select high-quality, diverse samples across granularities; and (3) hybrid optimization via supervised fine-tuning (SFT) and reinforcement learning (RL). Contribution/Results: SynSelect introduces the first "synthesis–selection" co-design paradigm, overcoming limitations of single-model generation and static pipelines. On multiple multimodal benchmarks, SFT alone achieves state-of-the-art (SOTA) performance; integrating RL yields further consistent gains. Empirical results demonstrate that the synthesized long-CoT data substantially enhances deep reasoning capabilities and cross-modal alignment in MLLMs.
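The selection stage described above can be illustrated with a minimal sketch. All names and scoring heuristics here are hypothetical stand-ins (the paper's actual instance- and batch-level criteria are not specified in this summary): instance-level filtering is mocked as a depth/answer check, and batch-level filtering as greedy deduplication by word-overlap similarity.

```python
def instance_score(cot: str, min_steps: int = 3) -> bool:
    """Instance-level filter (toy proxy): require some reasoning depth
    (at least `min_steps` sentence-like steps) and an explicit answer."""
    steps = [s for s in cot.split(".") if s.strip()]
    return len(steps) >= min_steps and "answer" in cot.lower()

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity, used as a cheap diversity measure."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def batch_select(cots: list[str], k: int = 2, max_sim: float = 0.8) -> list[str]:
    """Batch-level filter: greedily keep CoTs that are sufficiently
    dissimilar from everything already selected, up to k samples."""
    selected: list[str] = []
    for cot in cots:
        if all(jaccard(cot, s) < max_sim for s in selected):
            selected.append(cot)
        if len(selected) == k:
            break
    return selected

# Hypothetical outputs standing in for heterogeneous multimodal LRMs.
candidates = [
    "Step one: parse the figure. Step two: count objects. Answer: 4.",
    "Step one: parse the figure. Step two: count objects. Answer: 4.",  # near-duplicate
    "First, read the axes. Then compare bars. Finally state the answer: 4.",
]
kept = batch_select([c for c in candidates if instance_score(c)], k=2)
```

The design point is that the two filters act at different granularities: `instance_score` judges each chain in isolation, while `batch_select` enforces diversity across the retained set, so the SFT data is both individually deep and collectively varied.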

📝 Abstract
Large Reasoning Models (LRMs) have demonstrated remarkable performance on complex reasoning tasks through long Chain-of-Thought (CoT) reasoning. Extending these successes to multimodal reasoning remains challenging due to the increased complexity of integrating diverse input modalities and the scarcity of high-quality long CoT training data. Existing multimodal datasets and CoT synthesis methods still suffer from limited reasoning depth, modality conversion errors, and rigid generation pipelines, hindering model performance and stability. To this end, in this paper, we propose SynSelect, a novel three-stage Synthesis-Selection framework for generating high-quality long CoT data tailored to multimodal reasoning tasks. Specifically, SynSelect first leverages multiple heterogeneous multimodal LRMs to produce diverse candidate CoTs, and then applies both instance- and batch-level selection to filter high-quality CoTs that can effectively enhance the model's reasoning capabilities. Extensive experiments on multiple multimodal benchmarks demonstrate that models supervised fine-tuned on SynSelect-generated data significantly outperform baselines and achieve further improvements after reinforcement learning post-training. Our results validate SynSelect as an effective approach for advancing the reasoning capabilities of multimodal LRMs.
Problem

Research questions and friction points this paper is trying to address.

How to generate high-quality long Chain-of-Thought data for multimodal reasoning
How to select diverse candidate thoughts that enhance model reasoning capabilities
How to improve multimodal Large Reasoning Models' performance and stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-stage framework for multimodal Chain-of-Thought synthesis
Heterogeneous models generate diverse candidate reasoning chains
Instance and batch selection filter high-quality training data
Yizhi Wang
School of Computer Science and Engineering, Southeast University; Key Laboratory of Computer Network and Information Integration (SEU), Ministry of Education
Linan Yue
Southeast University
Trustworthy AI, Natural Language Processing
Min-Ling Zhang
Professor, School of Computer Science and Engineering, Southeast University, China
Artificial Intelligence, Machine Learning, Data Mining