🤖 AI Summary
This work addresses the high latency of chain-of-thought reasoning in multimodal large language models caused by autoregressive generation, as well as the visual information loss and hallucination induced by existing text-centric token compression methods. To this end, the authors propose V-Skip, a novel approach that, for the first time, introduces visual anchoring into token pruning criteria. They formulate a Visual-Anchored Information Bottleneck (VA-IB) framework augmented with a dual-path gating mechanism, which jointly evaluates linguistic surprisal and cross-modal attention flow during compression to preserve critical visual semantics. Evaluated on the Qwen2-VL and Llama-3.2 model families, V-Skip achieves a 2.9× inference speedup with negligible accuracy degradation and outperforms baselines by over 30% on DocVQA.
📄 Abstract
While Chain-of-Thought (CoT) reasoning significantly enhances the performance of Multimodal Large Language Models (MLLMs), its autoregressive nature incurs prohibitive latency. Current efforts to mitigate this via token compression often fail by blindly applying text-centric metrics to multimodal contexts. We identify a critical failure mode, termed Visual Amnesia, in which linguistically redundant but visually salient tokens are erroneously pruned, leading to hallucinations. To address this, we introduce V-Skip, which reformulates token pruning as a Visual-Anchored Information Bottleneck (VA-IB) optimization problem. V-Skip employs a dual-path gating mechanism that weighs token importance through both linguistic surprisal and cross-modal attention flow, effectively rescuing visually salient anchors. Extensive experiments on the Qwen2-VL and Llama-3.2 families demonstrate that V-Skip achieves a $2.9\times$ speedup with negligible accuracy loss. In particular, it preserves fine-grained visual details, outperforming baselines by over 30\% on DocVQA.
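The core idea of dual-path gating, scoring each reasoning token by both its linguistic surprisal and its attention mass onto visual tokens before pruning the lowest-scoring ones, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual method: the mixing weight `lam`, the min-max normalization, and the function names `dual_path_scores` and `prune` are all hypothetical.

```python
import numpy as np

def dual_path_scores(logprobs, attn_to_visual, lam=0.5):
    """Hypothetical dual-path gating score (not the paper's exact formula).

    logprobs:       log-probability of each reasoning token under the model
                    (low log-prob = high linguistic surprisal = worth keeping).
    attn_to_visual: each token's total attention mass onto image tokens
                    (high = visually anchored = worth keeping).
    lam:            assumed mixing weight between the two paths.
    """
    surprisal = -np.asarray(logprobs, dtype=float)   # surprisal = -log p
    visual = np.asarray(attn_to_visual, dtype=float)

    def norm(x):
        # Min-max normalize each path to [0, 1] so the weights are comparable.
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return lam * norm(surprisal) + (1 - lam) * norm(visual)

def prune(tokens, scores, keep_ratio=0.5):
    """Keep the top-scoring fraction of tokens, preserving original order."""
    k = max(1, int(len(tokens) * keep_ratio))
    keep = sorted(np.argsort(scores)[::-1][:k])
    return [tokens[i] for i in keep]
```

Under this sketch, a token like "red" that is highly predictable from context (low surprisal) but carries heavy attention onto the image can still survive pruning via the visual path, which is the intuition behind rescuing visually salient anchors from text-centric compression.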