Revisiting [CLS] and Patch Token Interaction in Vision Transformers

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a feature learning conflict in Vision Transformers: the global [CLS] token and the local patch tokens share a single computational pathway, which degrades performance on dense prediction tasks. The study shows, for the first time, that normalization layers implicitly differentiate between these two token types. Building on this insight, the authors propose a lightweight token-specific processing mechanism that decouples the two computational flows within the normalization layers and the early QKV projections. The approach adds no computational overhead and increases model parameters by only 8%, yet consistently improves segmentation performance by over 2 mIoU on standard benchmarks while preserving strong image classification accuracy.

📝 Abstract
Vision Transformers have emerged as powerful, scalable and versatile representation learners. To capture both global and local features, a learnable [CLS] class token is typically prepended to the input sequence of patch tokens. Despite their distinct nature, both token types are processed identically throughout the model. In this work, we investigate the friction between global and local feature learning under different pre-training strategies by analyzing the interactions between class and patch tokens. Our analysis reveals that standard normalization layers introduce an implicit differentiation between these token types. Building on this insight, we propose specialized processing paths that selectively disentangle the computational flow of class and patch tokens, particularly within normalization layers and early query-key-value projections. This targeted specialization leads to significantly improved patch representation quality for dense prediction tasks. Our experiments demonstrate segmentation performance gains of over 2 mIoU points on standard benchmarks, while maintaining strong classification accuracy. The proposed modifications introduce only an 8% increase in parameters, with no additional computational overhead. Through comprehensive ablations, we provide insights into which architectural components benefit most from specialization and how our approach generalizes across model scales and learning frameworks.
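The specialization described above can be illustrated with a minimal sketch. The idea is that a transformer block keeps one set of normalization (and, analogously, QKV projection) parameters for the [CLS] token at position 0 and a separate set for the patch tokens, so the two token types are no longer forced through identical affine transforms. The function and parameter names below are hypothetical, chosen for illustration; this is not the authors' implementation, and NumPy stands in for a deep learning framework.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-6):
    # Standard LayerNorm: normalize over the feature dimension,
    # then apply a learned scale (gamma) and shift (beta).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def token_specific_norm(tokens, cls_params, patch_params):
    """Apply separate LayerNorm affine parameters to the [CLS] token
    (sequence position 0) and to the patch tokens (positions 1..N).

    tokens: array of shape (batch, 1 + num_patches, dim)
    cls_params, patch_params: (gamma, beta) pairs of shape (dim,)
    """
    cls_out = layer_norm(tokens[:, :1], *cls_params)
    patch_out = layer_norm(tokens[:, 1:], *patch_params)
    return np.concatenate([cls_out, patch_out], axis=1)

# Toy example: batch of 2 sequences, 1 [CLS] + 4 patch tokens, dim 8.
rng = np.random.default_rng(0)
dim = 8
x = rng.normal(size=(2, 5, dim))
cls_params = (np.ones(dim), np.zeros(dim))          # identity affine for [CLS]
patch_params = (2.0 * np.ones(dim), np.zeros(dim))  # distinct affine for patches
y = token_specific_norm(x, cls_params, patch_params)
print(y.shape)  # (2, 5, 8)
```

The same split applied to the query-key-value projections (two weight matrices instead of one) is what the abstract calls disentangling the early QKV computation; since each token still passes through exactly one normalization and one projection, the FLOP count is unchanged and only the duplicated parameters add the reported 8% overhead.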
Problem

Research questions and friction points this paper is trying to address.

Vision Transformers
[CLS] token
patch tokens
feature learning
dense prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Transformers
token specialization
normalization layers
dense prediction
class-patch disentanglement