Nüwa: Mending the Spatial Integrity Torn by VLM Token Pruning

📅 2026-02-03
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language models (VLMs) rely on visual token pruning methods that perform well on visual question answering (VQA) but suffer significant degradation on visual grounding, because pruning disrupts the image's spatial structure. To address this, this work proposes a two-stage pruning framework: first, a swarm-intelligence-inspired separate-align-aggregate mechanism is applied after the visual encoder to preserve global spatial anchors and maintain spatial reference relationships; second, a text-guided dynamic pruning strategy is applied within the large language model to focus on task-relevant visual information. This approach is the first to integrate swarm intelligence into VLM pruning, substantially enhancing spatial awareness while maintaining efficiency. It achieves state-of-the-art performance on multiple VQA benchmarks (improving from 94% to 95%) and yields a dramatic gain on visual grounding tasks (rising from 7% to 47%).

๐Ÿ“ Abstract
Vision token pruning has proven to be an effective acceleration technique for the efficient Vision Language Model (VLM). However, existing pruning methods demonstrate excellent performance preservation in visual question answering (VQA) and suffer substantial degradation on visual grounding (VG) tasks. Our analysis of the VLM's processing pipeline reveals that strategies utilizing global semantic similarity and attention scores lose the global spatial reference frame, which is derived from the interactions of tokens'positional information. Motivated by these findings, we propose $\text{N\"uwa}$, a two-stage token pruning framework that enables efficient feature aggregation while maintaining spatial integrity. In the first stage, after the vision encoder, we apply three operations, namely separation, alignment, and aggregation, which are inspired by swarm intelligence algorithms to retain information-rich global spatial anchors. In the second stage, within the LLM, we perform text-guided pruning to retain task-relevant visual tokens. Extensive experiments demonstrate that $\text{N\"uwa}$ achieves SOTA performance on multiple VQA benchmarks (from 94% to 95%) and yields substantial improvements on visual grounding tasks (from 7% to 47%).
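The abstract's two-stage pipeline can be illustrated with a minimal sketch. The function names, the norm-based anchor scoring, and the nearest-anchor merging below are illustrative assumptions, not the paper's actual operations; the sketch only shows the overall shape: stage one keeps a subset of spatially anchored tokens and aggregates pruned neighbors into them (so spatial structure is merged rather than discarded), and stage two keeps the tokens most similar to a text query embedding.

```python
import numpy as np

def stage1_spatial_aggregate(feats, pos, keep_ratio=0.5):
    """Stage 1 sketch: keep high-norm tokens as spatial anchors and merge
    each pruned token into its nearest anchor (hypothetical stand-in for
    the paper's separate/align/aggregate operations)."""
    n = feats.shape[0]
    k = max(1, int(n * keep_ratio))
    # Anchor selection by feature norm is an assumption for illustration.
    anchors = np.argsort(-np.linalg.norm(feats, axis=1))[:k]
    keep = np.zeros(n, dtype=bool)
    keep[anchors] = True
    out = feats[anchors].copy()
    counts = np.ones(k)
    for i in np.where(~keep)[0]:
        # Merge the pruned token into the spatially closest kept anchor.
        j = int(np.argmin(np.linalg.norm(pos[anchors] - pos[i], axis=1)))
        out[j] += feats[i]
        counts[j] += 1
    return out / counts[:, None], pos[anchors]

def stage2_text_guided(feats, text_emb, keep_ratio=0.5):
    """Stage 2 sketch: retain the visual tokens most cosine-similar to a
    text query embedding (task-relevance proxy)."""
    sims = feats @ text_emb / (
        np.linalg.norm(feats, axis=1) * np.linalg.norm(text_emb) + 1e-8
    )
    k = max(1, int(feats.shape[0] * keep_ratio))
    return feats[np.argsort(-sims)[:k]]

# Toy usage: 16 visual tokens on a 2-D grid, pruned 16 -> 8 -> 4.
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))
pos = rng.uniform(0.0, 4.0, size=(16, 2))
f1, p1 = stage1_spatial_aggregate(feats, pos, keep_ratio=0.5)
f2 = stage2_text_guided(f1, rng.normal(size=8), keep_ratio=0.5)
```

Note the design contrast the paper argues for: a purely score-based drop would delete pruned tokens outright, whereas the aggregation step folds their features into surviving spatial anchors, which is the mechanism credited with preserving the spatial reference frame.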
Problem

Research questions and friction points this paper is trying to address.

Vision Language Model
Token Pruning
Spatial Integrity
Visual Grounding
Visual Question Answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

token pruning
spatial integrity
vision-language model
visual grounding
swarm intelligence
Yihong Huang
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ); School of Artificial Intelligence, Xidian University
Fei Ma
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
Yihua Shao
The Hong Kong Polytechnic University
Jingcai Guo
Hong Kong Polytechnic University
Efficient AI, Zero-Shot Learning, Edge AI, Machine Learning
Zitong Yu
U.S. Food and Drug Administration
Medical imaging, Deep learning, Machine learning, Image reconstruction
Laizhong Cui
Shenzhen University
Networking, Edge Computing, IoT, Big Data, Machine Learning
Qi Tian
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ); Huawei