🤖 AI Summary
Existing Interleaved-modal Chain-of-Thought (ICoT) approaches suffer from inefficient reasoning and semantic inconsistency caused by static visual insertion and incoherent multimodal representations. To address these limitations, this work proposes the DaP-ICoT framework, which introduces a dynamic, on-demand visual integration mechanism coupled with a precise visual grounding strategy. This enables the model to adaptively select when to inject visual tokens during reasoning, yielding contextually aligned and semantically coherent multimodal representations. Experimental results show that DaP-ICoT achieves state-of-the-art performance across multiple benchmarks and model architectures while significantly reducing image query frequency and cutting token consumption by 72.6%, substantially improving both reasoning efficiency and representational consistency.
📝 Abstract
Recently, Interleaved-modal Chain-of-Thought (ICoT) reasoning has achieved remarkable success by leveraging both multimodal inputs and outputs, attracting increasing attention. Despite this promising performance, current ICoT methods still suffer from two major limitations: (1) Static Visual Thought Positioning, which inserts visual information at fixed, predetermined steps, resulting in inefficient and inflexible reasoning; and (2) Broken Visual Thought Representation, which produces discontinuous and semantically incoherent visual tokens. To address these limitations, we introduce Interleaved-modal Chain-of-Thought reasoning with Dynamic and Precise Visual Thoughts (DaP-ICoT), which incorporates two key components: (1) Dynamic Visual Thought Integration adaptively introduces visual inputs based on reasoning needs, reducing redundancy and improving efficiency; (2) Precise Visual Thought Guidance ensures semantically coherent and contextually aligned visual representations. Experiments across multiple benchmarks and models demonstrate that DaP-ICoT achieves state-of-the-art performance. In addition, DaP-ICoT significantly reduces the number of inserted images, leading to a 72.6% decrease in token consumption and enabling more efficient ICoT reasoning.
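To make the two components more concrete, below is a minimal, hypothetical sketch (not the authors' code) of what an interleaved reasoning loop with on-demand, grounded visual thoughts could look like. The trigger signal, confidence threshold, grounding stub, and all function names here are illustrative assumptions, not the method described in the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple
import random


@dataclass
class Step:
    text: str
    # (x, y, w, h) crop injected as a "visual thought", or None if the step was text-only
    visual_region: Optional[Tuple[int, int, int, int]] = None


def need_visual_evidence(confidence: float, threshold: float = 0.6) -> bool:
    """Hypothetical trigger: request a visual thought only when confidence
    in the current textual step falls below a threshold."""
    return confidence < threshold


def ground_region(step_text: str,
                  image_size: Tuple[int, int] = (640, 480)) -> Tuple[int, int, int, int]:
    """Placeholder for precise grounding: map the current step to the image
    region it refers to. Here it simply returns a dummy central crop."""
    w, h = image_size
    return (w // 4, h // 4, w // 2, h // 2)


def generate_step(context: List[str]) -> Tuple[str, float]:
    """Stand-in for one reasoning step from a multimodal LLM; returns the
    step text and a mock confidence score."""
    return f"reasoning step {len(context)}", random.random()


def interleaved_reasoning(question: str, max_steps: int = 6) -> List[Step]:
    """Toy interleaved reasoning loop: visual thoughts are injected on demand
    (dynamic integration) and tied to a grounded image crop (precise guidance)."""
    context: List[str] = [question]
    trace: List[Step] = []
    for _ in range(max_steps):
        text, conf = generate_step(context)
        region = None
        if need_visual_evidence(conf):        # dynamic, on-demand insertion
            region = ground_region(text)      # precise, grounded visual thought
            context.append(f"[visual thought @ {region}]")
        context.append(text)
        trace.append(Step(text, region))
    return trace


if __name__ == "__main__":
    for step in interleaved_reasoning("What color is the mug on the left?"):
        print(step)
```

In this toy loop, steps that do not trip the confidence check add no extra image tokens to the context, which illustrates how injecting visual thoughts only on demand can reduce the number of inserted images and overall token consumption.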