AI Summary
Existing vision-language models (VLMs) employ visual token pruning methods that perform well on visual question answering (VQA) tasks but suffer significant performance degradation on visual grounding, because pruning disrupts the image's spatial structure. To address this, this work proposes a two-stage pruning framework: first, a swarm-intelligence-inspired separate-align-aggregate mechanism is applied after the visual encoder to preserve global spatial anchors and maintain spatial reference relationships; second, a text-guided dynamic pruning strategy is applied within the large language model to focus on task-relevant visual information. This approach is the first to integrate swarm intelligence into VLM pruning, substantially enhancing spatial awareness while maintaining efficiency. It achieves state-of-the-art performance on multiple VQA benchmarks (improving from 94% to 95%) and yields a dramatic gain on visual grounding tasks (rising from 7% to 47%).
Abstract
Vision token pruning has proven to be an effective acceleration technique for efficient Vision-Language Models (VLMs). However, existing pruning methods that preserve performance well on visual question answering (VQA) suffer substantial degradation on visual grounding (VG) tasks. Our analysis of the VLM's processing pipeline reveals that strategies relying on global semantic similarity and attention scores lose the global spatial reference frame, which is derived from the interactions of tokens' positional information. Motivated by these findings, we propose Nüwa, a two-stage token pruning framework that enables efficient feature aggregation while maintaining spatial integrity. In the first stage, after the vision encoder, we apply three operations, namely separation, alignment, and aggregation, which are inspired by swarm intelligence algorithms, to retain information-rich global spatial anchors. In the second stage, within the LLM, we perform text-guided pruning to retain task-relevant visual tokens. Extensive experiments demonstrate that Nüwa achieves SOTA performance on multiple VQA benchmarks (from 94% to 95%) and yields substantial improvements on visual grounding tasks (from 7% to 47%).
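To make the two-stage idea concrete, here is a minimal NumPy sketch of such a pipeline. This is an illustrative assumption, not the paper's actual Nüwa operators: stage 1 mimics the separation/alignment/aggregation idea with farthest-point anchor selection plus nearest-anchor feature averaging, and stage 2 scores the surviving anchors against a text embedding; all function names and parameters (`two_stage_prune`, `keep_stage1`, `keep_stage2`) are hypothetical.

```python
import numpy as np

def two_stage_prune(vis_tokens, positions, text_query,
                    keep_stage1=16, keep_stage2=8):
    """Hypothetical sketch of a two-stage visual-token pruning pipeline.

    Stage 1 (after the vision encoder): a boids-style separate/align/
    aggregate pass keeps spatially spread anchor tokens, preserving a
    global spatial reference frame.
    Stage 2 (inside the LLM): text-guided scoring keeps the anchors most
    relevant to the query embedding.
    """
    # --- Stage 1a: separation — greedy farthest-point sampling over the
    # 2-D token positions spreads anchors across the whole image.
    chosen = [int(np.linalg.norm(vis_tokens, axis=1).argmax())]
    while len(chosen) < keep_stage1:
        dist_to_chosen = np.min(
            np.linalg.norm(positions[:, None] - positions[chosen][None],
                           axis=-1),
            axis=1,
        )
        chosen.append(int(dist_to_chosen.argmax()))
    idx = np.array(chosen)

    # --- Stage 1b: alignment + aggregation — every token is assigned to
    # its nearest anchor, and each anchor absorbs (averages in) the
    # features of its cluster so pruned tokens' information survives.
    assign = np.argmin(
        np.linalg.norm(positions[:, None] - positions[idx][None], axis=-1),
        axis=1,
    )
    anchors = np.stack(
        [vis_tokens[assign == k].mean(axis=0) for k in range(len(idx))]
    )

    # --- Stage 2: text-guided pruning — keep anchors with the highest
    # dot-product similarity to the text query embedding.
    scores = anchors @ text_query
    keep = np.argsort(scores)[-keep_stage2:]
    return anchors[keep], positions[idx][keep]
```

The two stages are deliberately decoupled: the spatial pass runs once after the encoder, while the text-guided pass could be re-applied per query inside the LLM.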