Bridging Efficiency and Transparency: Explainable CoT Compression in Multimodal Large Reasoning Models

📅 2026-02-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the redundancy in long chains of thought (Long CoTs) within multimodal large reasoning models, which impairs inference efficiency. Existing compression methods often disrupt critical vision–language alignment and lack interpretability. To overcome these limitations, this study formulates CoT compression as a sequential decision-making process and optimizes it with reinforcement learning, retaining essential reasoning steps while generating natural-language explanations. The proposed approach significantly shortens reasoning sequences across multiple multimodal reasoning benchmarks without compromising answer accuracy. Moreover, it provides high-quality, interpretable justifications for its compression decisions, thereby achieving efficient and transparent multimodal reasoning.

๐Ÿ“ Abstract
Long chains of thought (Long CoTs) are widely employed in multimodal reasoning models to tackle complex tasks by capturing detailed visual information. However, these Long CoTs are often excessively lengthy and contain redundant reasoning steps, which can hinder inference efficiency. Compressing these long CoTs is a natural solution, yet existing approaches face two major challenges: (1) they may compromise the integrity of visual-textual reasoning by removing essential alignment cues, and (2) the compression process lacks explainability, making it difficult to discern which information is critical. To address these problems, we propose XMCC, an eXplainable Multimodal CoT Compressor that formulates compression as a sequential decision-making process optimized via reinforcement learning. XMCC effectively shortens reasoning trajectories while preserving key reasoning steps and answer correctness, and simultaneously generates natural-language explanations for its compression decisions. Extensive experiments on representative multimodal reasoning benchmarks demonstrate that XMCC not only reduces reasoning length but also provides interpretable explanations, validating its effectiveness.
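The abstract's formulation — compression as a sequential keep/drop decision over reasoning steps, scored by an RL-style reward that trades answer preservation against trajectory length, with a natural-language justification per decision — can be sketched in miniature. Everything below (the heuristic `keep_prob` scorer, the reward weights, the toy steps) is a hypothetical illustration of the formulation, not the paper's actual XMCC model:

```python
def compress_cot(steps, keep_prob, threshold=0.5):
    """Sequentially decide whether to keep each reasoning step.
    `keep_prob` stands in for a learned policy P(keep | step);
    here it is a hand-written heuristic, not a trained network."""
    kept, explanations = [], []
    for step in steps:
        keep = keep_prob(step) > threshold
        if keep:
            kept.append(step)
        # A short natural-language justification per decision,
        # mirroring the explainability goal in the abstract.
        explanations.append(
            f"{'KEEP' if keep else 'DROP'}: {step!r} "
            f"({'carries essential grounding' if keep else 'redundant'})"
        )
    return kept, explanations


def reward(kept, n_total, essential, comp_weight=0.5):
    """Toy RL reward: +1 if every essential step survives (a proxy
    for answer correctness), plus a bonus for compressing harder.
    The weights are illustrative, not from the paper."""
    correct = all(e in kept for e in essential)
    compression = 1.0 - len(kept) / n_total
    return (1.0 if correct else 0.0) + comp_weight * compression


# Toy multimodal trajectory: steps tagged "[key ...]" are essential.
steps = [
    "read the chart axes [key visual cue]",
    "restate the question in words",
    "locate the 2023 bar [key visual cue]",
    "recap the previous two steps",
    "compute 42 - 17 = 25 [key computation]",
]
policy = lambda s: 0.9 if "[key" in s else 0.1  # hypothetical scorer
kept, why = compress_cot(steps, policy)
r = reward(kept, len(steps), essential=[steps[0], steps[2], steps[4]])
```

In this sketch the policy keeps the three essential steps and drops the two redundant ones, so the reward collects both the full correctness term and a compression bonus; during training, reinforcement learning would adjust the policy to maximize exactly this kind of combined signal.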
Problem

Research questions and friction points this paper is trying to address.

Long Chain-of-Thought
Multimodal Reasoning
CoT Compression
Explainability
Redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Chain-of-Thought Compression
Multimodal Reasoning
Reinforcement Learning
Efficiency-Transparency Trade-off
Yizhi Wang
School of Computer Science and Engineering, Southeast University; Key Laboratory of Computer Network and Information Integration (SEU), Ministry of Education, China
Linan Yue
Southeast University
Trustworthy AI · Natural Language Processing
Min-Ling Zhang
Professor, School of Computer Science and Engineering, Southeast University, China
Artificial Intelligence · Machine Learning · Data Mining