Frequency-Aware Token Reduction for Efficient Vision Transformer

📅 2025-11-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Vision Transformers (ViTs) suffer from quadratic computational complexity in self-attention with respect to token sequence length, and existing token compression methods neglect the frequency-domain characteristics of tokens, leading to rank collapse and over-smoothing. To address this, we propose Frequency-Aware Token Compression (FATC), the first token compression framework grounded in frequency-domain analysis. FATC employs self-attention-driven spectral decomposition to separate tokens into high-frequency components (capturing fine details and edges) and low-frequency components (encoding semantics and background). It selectively preserves critical high-frequency tokens while aggregating low-frequency tokens into compact DC-like representations. A lightweight reconstruction module further refines compressed features. Evaluated on ImageNet classification, object detection, and semantic segmentation, FATC reduces FLOPs by 32% on average while improving Top-1 accuracy by +0.8%. It effectively mitigates rank collapse and over-smoothing, demonstrating the critical role of frequency-domain modeling in efficient ViT design.

Technology Category

Application Category

📝 Abstract
Vision Transformers have demonstrated exceptional performance across various computer vision tasks, yet their quadratic computational complexity concerning token length remains a significant challenge. To address this, token reduction methods have been widely explored. However, existing approaches often overlook the frequency characteristics of self-attention, such as rank collapsing and over-smoothing phenomenon. In this paper, we propose a frequency-aware token reduction strategy that improves computational efficiency while preserving performance by mitigating rank collapsing. Our method partitions tokens into high-frequency tokens and low-frequency tokens. high-frequency tokens are selectively preserved, while low-frequency tokens are aggregated into a compact direct current token to retain essential low-frequency components. Through extensive experiments and analysis, we demonstrate that our approach significantly improves accuracy while reducing computational overhead and mitigating rank collapsing and over smoothing. Furthermore, we analyze the previous methods, shedding light on their implicit frequency characteristics and limitations.
Problem

Research questions and friction points this paper is trying to address.

Addresses quadratic computational complexity in Vision Transformers
Mitigates rank collapsing and over-smoothing in self-attention
Reduces computational overhead while preserving model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frequency-aware token reduction strategy for Vision Transformers
Partition tokens into high-frequency and low-frequency categories
Preserve high-frequency tokens and aggregate low-frequency ones
🔎 Similar Papers
No similar papers found.