🤖 AI Summary
Current large vision-language models employ fixed-ratio visual token compression for high-resolution inputs, rendering them inflexible to scene complexity and prone to discarding semantically critical tokens, thereby degrading downstream performance. To address this, we propose GlimpsePrune, a dynamic visual token pruning framework inspired by human "glimpse"-based visual cognition: it performs data-driven, importance-aware token pruning in a single forward pass before answer generation, enabling adaptive compression. The reduced computational cost also makes fine-tuning more affordable, yielding an enhanced variant, GlimpsePrune+. On free-form visual question answering (VQA), GlimpsePrune prunes 92.6% of visual tokens while preserving baseline accuracy, and GlimpsePrune+ attains 110% of the baseline's performance at a comparable pruning rate, demonstrating a superior trade-off between computational efficiency and task accuracy.
📝 Abstract
Visual token compression is critical for Large Vision-Language Models (LVLMs) to efficiently process high-resolution inputs. Existing methods typically adopt fixed compression ratios and therefore cannot adapt to scenes of varying complexity, often causing imprecise pruning that discards informative visual tokens and degrades model performance. To address this issue, we introduce a dynamic pruning framework, GlimpsePrune, inspired by human cognition. It takes a data-driven "glimpse" and prunes irrelevant visual tokens in a single forward pass before answer generation. This approach prunes 92.6% of visual tokens while, on average, fully retaining baseline performance on free-form VQA tasks. The reduced computational cost also enables more effective fine-tuning: an enhanced GlimpsePrune+ achieves 110% of the baseline performance while maintaining a similarly high pruning rate. Our work paves a new way for building more powerful and efficient LVLMs.
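The core idea of importance-aware, dynamic-ratio pruning can be sketched as follows. This is a minimal illustrative implementation, not the paper's method: the importance score here is a plain dot-product similarity against a stand-in "glimpse" query vector, whereas the actual mechanism in GlimpsePrune is learned; the function and parameter names (`glimpse_prune`, `keep_threshold`) are hypothetical.

```python
import numpy as np

def glimpse_prune(visual_tokens, glimpse_query, keep_threshold=0.5):
    """Toy importance-aware pruning of visual tokens.

    visual_tokens: (N, D) array of visual token embeddings
    glimpse_query: (D,) vector standing in for a learned "glimpse" signal
    keep_threshold: fraction of the top importance score a token must reach
    """
    # Importance: similarity of each visual token to the glimpse query.
    scores = visual_tokens @ glimpse_query            # shape (N,)
    # Dynamic, data-dependent cut-off: tokens scoring below a fraction of the
    # top score are pruned, so the number of kept tokens varies per input
    # instead of following a fixed compression ratio.
    keep = scores >= keep_threshold * scores.max()
    return visual_tokens[keep], keep

rng = np.random.default_rng(0)
tokens = rng.normal(size=(576, 64))   # e.g., a 24x24 grid of visual tokens
query = rng.normal(size=64)
kept, mask = glimpse_prune(tokens, query)
print(f"kept {int(mask.sum())} of {len(tokens)} tokens "
      f"({100 * (1 - mask.mean()):.1f}% pruned)")
```

Because the cut-off is relative to the per-input score distribution rather than a fixed token budget, a cluttered scene naturally retains more tokens than a simple one, which is the adaptivity the fixed-ratio baselines lack.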