Chain-of-Thought Compression Should Not Be Blind: V-Skip for Efficient Multimodal Reasoning via Dual-Path Anchoring

πŸ“… 2026-01-20
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
This work addresses the high latency of chain-of-thought reasoning in multimodal large language models caused by autoregressive generation, as well as the visual information loss and hallucination induced by existing text-centric token compression methods. To this end, the authors propose V-Skip, a novel approach that introduces visual anchoring into token pruning criteria for the first time. They formulate a Visual Anchoring Information Bottleneck (VA-IB) framework augmented with a dual-path gating mechanism, which jointly evaluates linguistic surprisal and cross-modal attention flow during compression to preserve critical visual semantics. Evaluated on Qwen2-VL and Llama-3.2 series models, V-Skip achieves a 2.9Γ— inference speedup and improves performance on DocVQA by over 30%, with negligible accuracy degradation.

πŸ“ Abstract
While Chain-of-Thought (CoT) reasoning significantly enhances the performance of Multimodal Large Language Models (MLLMs), its autoregressive nature incurs prohibitive latency. Current efforts to mitigate this via token compression often fail by blindly applying text-centric metrics to multimodal contexts. We identify a critical failure mode, termed Visual Amnesia, in which tokens that are linguistically redundant but visually salient are erroneously pruned, leading to hallucinations. To address this, we introduce V-Skip, which reformulates token pruning as a Visual-Anchored Information Bottleneck (VA-IB) optimization problem. V-Skip employs a dual-path gating mechanism that weighs token importance through both linguistic surprisal and cross-modal attention flow, effectively rescuing visually salient anchors. Extensive experiments on the Qwen2-VL and Llama-3.2 families demonstrate that V-Skip achieves a $2.9\times$ speedup with negligible accuracy loss. In particular, it preserves fine-grained visual details, outperforming baselines by over 30\% on DocVQA.
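The dual-path gating idea from the abstract can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the actual scoring functions, normalization, gate parameterization, and pruning thresholds are assumptions here. The sketch scores each CoT token by linguistic surprisal and by its attention mass on visual tokens, mixes the two paths, and keeps only the top-scoring fraction.

```python
import numpy as np

def dual_path_keep_mask(token_logprobs, cross_attn, alpha=0.5, keep_ratio=0.5):
    """Toy dual-path gating for CoT token pruning (illustrative only).

    token_logprobs: log p(token | prefix) per generated token, shape [T]
    cross_attn:     attention mass each token places on visual tokens, shape [T]
    alpha:          mixing weight between the two paths (assumed value)
    keep_ratio:     fraction of tokens retained after pruning (assumed value)
    """
    surprisal = -np.asarray(token_logprobs, dtype=float)  # high = linguistically informative
    visual = np.asarray(cross_attn, dtype=float)          # high = visually anchored

    def norm(x):
        # Min-max normalize each path to [0, 1] so the scores are comparable.
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    score = alpha * norm(surprisal) + (1 - alpha) * norm(visual)

    # Keep the top keep_ratio fraction of tokens; prune the rest.
    k = max(1, int(len(score) * keep_ratio))
    keep_idx = np.argsort(score)[-k:]
    mask = np.zeros(len(score), dtype=bool)
    mask[keep_idx] = True
    return mask
```

The visual path is what "rescues" the Visual Amnesia case: a token with low surprisal (a text-only criterion would prune it) but high cross-modal attention still receives a high combined score and survives compression.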
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought
Multimodal Reasoning
Token Compression
Visual Amnesia
Latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-Thought Compression
Visual Amnesia
Visual-Anchored Information Bottleneck
Dual-Path Gating
Multimodal Reasoning