🤖 AI Summary
This work addresses the high computational overhead and low inference efficiency in vision-language models caused by redundant visual tokens. Unlike existing attention-based compression methods—which suffer from positional bias and poor compatibility with efficient attention kernels—this study proposes a compression framework that operates without relying on attention mechanisms. Drawing on an information-theoretic perspective, the method reconstructs the visual tokens via linear approximation from a small set of basis tokens and prunes the least informative tokens according to their approximation error, a criterion used here for the first time as a basis for compression. This approach integrates seamlessly with efficient attention kernels such as FlashAttention. Experiments show that it retains 95.2% of performance on image understanding tasks while compressing 88.9% of tokens, and even improves performance to 100.4% on video tasks with 87.5% token reduction, substantially enhancing inference efficiency.
📝 Abstract
Recent Vision-Language Models (VLMs) have demonstrated remarkable multimodal understanding capabilities, yet their redundant visual tokens incur prohibitive computational overhead and degrade inference efficiency. Prior studies typically rely on [CLS] attention or text-vision cross-attention to identify and discard redundant visual tokens. Despite promising results, such solutions are prone to introducing positional bias and, more critically, are incompatible with efficient attention kernels such as FlashAttention, limiting their practical deployment for VLM acceleration. In this paper, we step away from attention dependencies and revisit visual token compression from an information-theoretic perspective, aiming to maximally preserve visual information without any attention involvement. We present ApET, an Approximation-Error guided Token compression framework. ApET first reconstructs the original visual tokens with a small set of basis tokens via linear approximation, then leverages the approximation error to identify and drop the least informative tokens. Extensive experiments across multiple VLMs and benchmarks demonstrate that ApET retains 95.2% of the original performance on image-understanding tasks and even attains 100.4% on video-understanding tasks, while compressing the token budgets by 88.9% and 87.5%, respectively. Thanks to its attention-free design, ApET seamlessly integrates with FlashAttention, enabling further inference acceleration and making VLM deployment more practical. Code is available at https://github.com/MaQianKun0/ApET.
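To make the reconstruct-then-prune idea concrete, the sketch below shows one plausible reading of the pipeline in PyTorch: select a small set of basis tokens, reconstruct every visual token from that basis by least squares, and drop the tokens whose reconstruction error is smallest, since the basis already explains them. The basis-selection strategy (uniform stride sampling), the keep/basis split, and the function name `apet_prune_sketch` are illustrative assumptions rather than details taken from the paper.

```python
import torch


def apet_prune_sketch(tokens: torch.Tensor, keep_ratio: float = 0.111) -> torch.Tensor:
    """Hedged sketch of approximation-error guided token pruning.

    `tokens` is an (N, d) matrix of visual tokens. A small set of basis
    tokens is chosen, all tokens are reconstructed from that basis via
    least squares, and tokens with the smallest reconstruction error are
    treated as the most redundant and dropped. The basis selection and the
    exact keep rule here are assumptions, not the published algorithm.
    """
    n, d = tokens.shape
    n_keep = max(1, int(n * keep_ratio))
    n_basis = max(1, n_keep // 2)  # assumed split between basis tokens and residual-kept tokens

    # Assumed basis selection: uniform stride sampling over the token sequence.
    basis_idx = torch.linspace(0, n - 1, n_basis).long()
    basis = tokens[basis_idx]  # (n_basis, d)

    # Linear approximation: solve min_C || basis^T C^T - tokens^T ||^2 for coefficients C.
    coeffs = torch.linalg.lstsq(basis.T, tokens.T).solution.T  # (N, n_basis)
    recon = coeffs @ basis  # (N, d) reconstruction of every token from the basis

    # Per-token approximation error; a low error means the token is nearly redundant.
    err = (tokens - recon).norm(dim=-1)
    err[basis_idx] = float("inf")  # always keep the basis tokens themselves

    keep_idx = err.topk(n_keep).indices.sort().values
    return tokens[keep_idx]  # compressed token sequence
```

Because this criterion depends only on the token embeddings, not on attention maps, such a pruning step can run before (or between) transformer layers that use fused kernels like FlashAttention, which is the compatibility advantage the abstract highlights.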