Neighbor-Aware Token Reduction via Hilbert Curve for Vision Transformers

📅 2025-12-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) suffer from low computational efficiency due to token redundancy, and existing token pruning or merging methods neglect 2D spatial locality, losing local contextual information. To address this, we propose a neighbor-aware token reduction method based on Hilbert curve reordering that strictly preserves the 2D neighborhood structure within a 1D token sequence. Our approach applies a Hilbert mapping to reorder tokens in a spatially coherent way, introduces Neighbor-Aware Pruning (NAP) and Merging by Adjacent Token similarity (MAT), and incorporates ViT feature reparameterization to preserve information integrity. Evaluated on ImageNet, the method achieves state-of-the-art accuracy-efficiency trade-offs: +1.8% Top-1 accuracy over TokenLearner with a 37% inference speedup, significantly outperforming PatchMerging, DynamicViT, and other baselines.
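The Hilbert reordering underlying the method can be illustrated with the standard Hilbert curve index computation (the paper does not publish code; this is a generic sketch of the well-known xy-to-distance algorithm, not the authors' implementation). Sorting patch tokens by this index yields a 1D sequence in which consecutive tokens are 2D grid neighbors:

```python
def xy2d(n, x, y):
    """Hilbert curve distance of grid cell (x, y) on an n x n grid
    (n must be a power of two). Standard iterative algorithm."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/reflect the quadrant so recursion stays consistent
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# reorder a 4x4 patch grid into a spatially coherent 1D sequence
order = sorted(((x, y) for x in range(4) for y in range(4)),
               key=lambda p: xy2d(4, *p))
```

The key property exploited by the paper is that adjacent positions in `order` are always adjacent in the 2D grid, unlike raster-scan ordering, which jumps across the image at row boundaries.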

📝 Abstract
Vision Transformers (ViTs) have achieved remarkable success in visual recognition tasks, but redundant token representations limit their computational efficiency. Existing token merging and pruning strategies often overlook spatial continuity and neighbor relationships, resulting in the loss of local context. This paper proposes novel neighbor-aware token reduction methods based on Hilbert curve reordering, which explicitly preserves the neighbor structure in a 2D space using 1D sequential representations. Our method introduces two key strategies: Neighbor-Aware Pruning (NAP) for selective token retention and Merging by Adjacent Token similarity (MAT) for local token aggregation. Experiments demonstrate that our approach achieves state-of-the-art accuracy-efficiency trade-offs compared to existing methods. This work highlights the importance of spatial continuity and neighbor structure, offering new insights for the architectural optimization of ViTs.
Problem

Research questions and friction points this paper is trying to address.

Reduces redundant tokens in Vision Transformers for efficiency
Preserves spatial continuity and neighbor relationships in token reduction
Improves accuracy-efficiency trade-offs using Hilbert curve reordering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hilbert curve reordering for neighbor structure preservation
Neighbor-Aware Pruning for selective token retention
Merging by Adjacent Token similarity for local aggregation
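The MAT idea of merging by adjacent-token similarity can be sketched as follows. Assuming tokens are already in Hilbert order, one plausible reading (this is an illustrative sketch, not the paper's exact procedure; the function name and greedy pair selection are assumptions) is to compute cosine similarity between each pair of consecutive tokens and average the most similar pairs:

```python
import numpy as np

def merge_adjacent_tokens(tokens, r):
    """Sketch of MAT-style merging: tokens is an (N, D) array assumed to be
    in Hilbert order; merge the r most-similar adjacent pairs by their mean."""
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = (normed[:-1] * normed[1:]).sum(axis=1)  # cosine sim of neighbors
    # greedily pick the r highest-similarity pairs, without letting a token
    # participate in two merges
    chosen, used = set(), set()
    for i in np.argsort(-sim):
        i = int(i)
        if i not in used and i + 1 not in used:
            chosen.add(i)
            used.update((i, i + 1))
            if len(chosen) == r:
                break
    # rebuild the sequence, replacing each chosen pair with its mean
    out, i = [], 0
    while i < len(tokens):
        if i in chosen:
            out.append((tokens[i] + tokens[i + 1]) / 2)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return np.stack(out)
```

Because merging only ever combines neighbors in the Hilbert sequence, each merged token aggregates spatially adjacent patches, which is the local-context preservation the paper argues raster-order merging lacks.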
Yunge Li
Department of Computer Science and Engineering, Oakland University, Rochester, MI 48309, USA
Lanyu Xu
Oakland University
Edge Computing · Efficient AI · Connected Health