From Perception to Reasoning: Deep Thinking Empowers Multimodal Large Language Models

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) excel at perception but still suffer from opaque reasoning paths and poor generalization on complex multimodal reasoning tasks. Method: This paper presents a systematic survey of Multimodal Chain-of-Thought (MCoT), analyzing the technical evolution and task demands behind its emergence and organizing mainstream methods along three axes: CoT paradigms, the post-training stage, and the inference stage, together with their underlying mechanisms. Contributions/Results: The survey summarizes existing evaluation benchmarks and metrics, discusses MCoT's application scenarios, identifies open challenges such as modality alignment bias and reasoning-chain fragmentation, and outlines future research directions, providing a systematic foundation for improving both the interpretability and the generalization of multimodal reasoning.

📝 Abstract
With the remarkable success of Multimodal Large Language Models (MLLMs) in perception tasks, enhancing their complex reasoning capabilities has become a critical research focus. Existing models still face challenges such as opaque reasoning paths and insufficient generalization ability. Chain-of-Thought (CoT) reasoning, which has proved highly effective in language models by improving reasoning transparency and output interpretability, holds promise for strengthening model reasoning when extended to the multimodal domain. This paper provides a systematic review centered on Multimodal Chain-of-Thought (MCoT). It first analyzes the background and theoretical motivations behind MCoT's emergence from the perspectives of technical evolution and task demands. It then introduces mainstream MCoT methods from three aspects: CoT paradigms, the post-training stage, and the inference stage, and analyzes their underlying mechanisms. The paper further summarizes existing evaluation benchmarks and metrics and discusses the application scenarios of MCoT. Finally, it analyzes the challenges MCoT currently faces and offers an outlook on future research directions.
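To make the core idea concrete: at inference time, the simplest MCoT recipe pairs an image with a question and explicitly asks the model to surface visual evidence and intermediate reasoning steps before answering. The sketch below builds such a prompt as a chat-style message list; the `role`/`content`-parts schema and the `build_mcot_prompt` helper are illustrative assumptions following a common chat-completion convention, not an API from the paper.

```python
# Minimal sketch of a Multimodal Chain-of-Thought (MCoT) prompt.
# Assumption: a chat-completion-style message schema with "text" and
# "image_url" content parts; field names are illustrative only.

def build_mcot_prompt(question: str, image_url: str) -> list[dict]:
    """Assemble a chat-style request that elicits step-by-step
    multimodal reasoning instead of a direct answer."""
    system = (
        "You are a careful visual reasoner. First describe the relevant "
        "visual evidence, then reason step by step, then state the result "
        "on a final line beginning with 'Answer:'."
    )
    return [
        {"role": "system", "content": system},
        {
            "role": "user",
            "content": [
                # The image is attached alongside the question so the
                # reasoning chain can reference visual evidence.
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": f"{question}\nLet's think step by step."},
            ],
        },
    ]

prompt = build_mcot_prompt(
    "How many people in the photo are wearing hats?",
    "https://example.com/street.jpg",
)
```

Post-training approaches surveyed in the paper go further, fine-tuning the model on annotated reasoning chains rather than relying on prompting alone, but the prompt structure above is the inference-stage baseline they build on.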
Problem

Research questions and friction points this paper is trying to address.

Enhancing complex reasoning capabilities in Multimodal Large Language Models
Addressing opaque reasoning paths and insufficient generalization in MLLMs
Extending Chain-of-Thought reasoning to multimodal domains systematically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends Chain-of-Thought reasoning to multimodal tasks
Systematically reviews Multimodal Chain-of-Thought methods
Analyzes mechanisms across training and inference stages
Authors

Wenxin Zhu
Faculty of Computing, Harbin Institute of Technology, Harbin, China

Andong Chen
Faculty of Computing, Harbin Institute of Technology, Harbin, China

Yuchen Song
Faculty of Computing, Harbin Institute of Technology, Harbin, China

Kehai Chen
Harbin Institute of Technology (Shenzhen)

Conghui Zhu
Faculty of Computing, Harbin Institute of Technology, Harbin, China

Ziyan Chen
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences

Tiejun Zhao
Faculty of Computing, Harbin Institute of Technology, Harbin, China