Simpler Fast Vision Transformers with a Jumbo CLS Token

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing accuracy and inference speed in lightweight Vision Transformers (ViTs). The authors propose Jumbo, a method that widens the [CLS] token: the wide token is split to match the patch token width before self-attention, processed alongside the patch tokens, and reassembled afterward, then passed through a dedicated, wider feed-forward network (FFN) to strengthen global representation learning. Jumbo preserves the plain ViT's simplicity and requires no modification to the self-attention mechanism. On ImageNet-1K it improves top-1 accuracy over ViT+Registers by 13.5% for ViT-nano and 3.2% for ViT-tiny; on ImageNet-21K it gains 3.4% for ViT-small. These Jumbo models outperform specialized compute-efficient ViT variants at high speeds, and the method also adapts to non-image data such as time series.

📝 Abstract
We introduce a simple enhancement to the global processing of vision transformers (ViTs) to improve accuracy while maintaining throughput. Our approach, Jumbo, creates a wider CLS token, which is split to match the patch token width before attention, processed with self-attention, and reassembled. After attention, Jumbo applies a dedicated, wider FFN to this token. Jumbo significantly improves over ViT+Registers on ImageNet-1K at high speeds (by 3.2% for ViT-tiny and 13.5% for ViT-nano); these Jumbo models even outperform specialized compute-efficient models while preserving the architectural advantages of plain ViTs. Although Jumbo sees no gains for ViT-small on ImageNet-1K, it gains 3.4% on ImageNet-21K over ViT+Registers. Both findings indicate that Jumbo is most helpful when the ViT is otherwise too narrow for the task. Finally, we show that Jumbo can be easily adapted to excel on data beyond images, e.g., time series.
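The split-attend-reassemble flow described in the abstract can be sketched with plain array operations. This is a minimal shape-level illustration, not the paper's implementation: the dimensions (`d`, `k`, `n`) are made up for the example, and uniform mean-mixing stands in for real multi-head self-attention so the token bookkeeping stays visible.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64   # patch token width (illustrative, not the paper's setting)
k = 4    # Jumbo multiplier: the CLS token is k*d wide
n = 16   # number of patch tokens

patches = rng.standard_normal((n, d))
jumbo_cls = rng.standard_normal(k * d)   # one wide CLS token

# 1) Before attention: split the wide CLS token into k patch-width tokens.
cls_split = jumbo_cls.reshape(k, d)
tokens = np.concatenate([cls_split, patches], axis=0)   # (k + n, d)

# 2) Stand-in for self-attention: uniform mixing across tokens.
#    (A real ViT uses softmax attention; unchanged by Jumbo.)
attn_out = tokens.mean(axis=0, keepdims=True).repeat(k + n, axis=0)

# 3) After attention: reassemble the k CLS pieces into one wide token.
jumbo_out = attn_out[:k].reshape(k * d)

# 4) Dedicated wider FFN applied only to the Jumbo CLS token
#    (patch tokens would go through the usual narrower FFN).
W1 = rng.standard_normal((k * d, 4 * k * d)) * (k * d) ** -0.5
W2 = rng.standard_normal((4 * k * d, k * d)) * (4 * k * d) ** -0.5
ffn_out = np.maximum(jumbo_out @ W1, 0.0) @ W2   # ReLU in place of GELU

# ffn_out keeps the wide CLS shape: (k * d,)
```

Note that steps 1 and 3 are pure reshapes, which is why the self-attention mechanism itself needs no changes; only the CLS width and its FFN grow.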
Problem

Research questions and friction points this paper is trying to address.

Enhance the global processing of vision transformers
Improve accuracy while maintaining throughput
Adapt Jumbo to non-image data such as time series
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wider (Jumbo) CLS token
Split-attend-reassemble scheme that leaves self-attention unchanged
Dedicated wider FFN for the CLS token
A. Fuller — Carleton University, Ottawa, Canada
Yousef Yassin — Carleton University
Daniel G. Kyrollos — Carleton University, Ottawa, Canada
Evan Shelhamer — UBC / Vector Institute / CIFAR AI Chair
James R. Green — Carleton University, Ottawa, Canada