🤖 AI Summary
Vision Transformers (ViTs) suffer from low computational efficiency due to token redundancy, and existing token pruning or merging methods neglect 2D spatial locality, losing local contextual information. To address this, we propose a neighborhood-aware token reduction method based on Hilbert curve reordering, the first to strictly preserve 2D neighborhood structure within a 1D token sequence. Our approach reorders tokens via a Hilbert mapping so that spatial neighbors remain adjacent in the sequence, introduces Neighbor-Aware Pruning (NAP) and Merging by Adjacent Token similarity (MAT) based on mean adjacent-token cosine similarity, and incorporates ViT feature reparameterization to preserve information integrity. Evaluated on ImageNet, our method achieves state-of-the-art accuracy-efficiency trade-offs: +1.8% Top-1 accuracy over TokenLearner with a 37% inference speedup, significantly outperforming PatchMerging, DynamicViT, and other baselines.
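The key property the summary relies on is that a Hilbert curve visits a 2D grid so that consecutive 1D indices are always 2D neighbors. As a minimal sketch (not the paper's code; the function name and raster-order token layout are illustrative), the standard iterative distance-to-coordinate conversion can reorder a flattened patch grid:

```python
def hilbert_d2xy(n: int, d: int) -> tuple[int, int]:
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid.

    n must be a power of two. Standard iterative formulation: at each
    scale s, decode one quadrant from two bits of d and rotate/flip.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant so sub-curves connect
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y


def hilbert_order(n: int) -> list[int]:
    """Indices that reorder raster-flattened tokens into Hilbert order."""
    return [y * n + x for x, y in (hilbert_d2xy(n, d) for d in range(n * n))]
```

For a 14x14 ViT patch grid one would pad or use the nearest power-of-two side; after `tokens = tokens[hilbert_order(n)]`, any window of consecutive tokens covers a spatially compact 2D region, which is what makes purely sequential pruning/merging neighborhood-aware.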
📝 Abstract
Vision Transformers (ViTs) have achieved remarkable success in visual recognition tasks, but redundant token representations limit their computational efficiency. Existing token merging and pruning strategies often overlook spatial continuity and neighbor relationships, resulting in the loss of local context. This paper proposes a novel neighbor-aware token reduction method based on Hilbert curve reordering, which explicitly preserves 2D neighbor structure within the 1D token sequence. Our method introduces two key strategies: Neighbor-Aware Pruning (NAP) for selective token retention and Merging by Adjacent Token similarity (MAT) for local token aggregation. Experiments demonstrate that our approach achieves state-of-the-art accuracy-efficiency trade-offs compared to existing methods. This work highlights the importance of spatial continuity and neighbor structure, offering new insights for the architectural optimization of ViTs.
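The abstract does not spell out the MAT fusion rule, but merging by adjacent-token similarity can be sketched as follows, assuming tokens are already in Hilbert order, similarity is cosine between consecutive tokens, and each merge averages one adjacent pair (the paper's actual fusion may weight or reparameterize differently):

```python
import numpy as np


def merge_adjacent(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most similar non-overlapping adjacent token pairs.

    tokens: (N, D) array in Hilbert order, so index-adjacent tokens are
    spatial neighbors. Returns (N - r, D), with each chosen pair
    replaced by its mean (illustrative fusion rule).
    """
    unit = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sims = (unit[:-1] * unit[1:]).sum(axis=1)  # cosine sim of pair (i, i+1)

    picked, used = set(), set()
    for i in np.argsort(-sims):               # most similar pairs first
        if i in used or i + 1 in used:
            continue                          # pairs must not overlap
        picked.add(int(i))
        used.update((int(i), int(i) + 1))
        if len(picked) == r:
            break

    out, i = [], 0
    while i < len(tokens):
        if i in picked:
            out.append((tokens[i] + tokens[i + 1]) / 2)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return np.stack(out)
```

Because the sequence is Hilbert-ordered, each merged pair corresponds to two spatially neighboring patches, so aggregation stays local; on a raster-ordered sequence the same rule could fuse patches from opposite ends of a row.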