Vision-TTT: Efficient and Expressive Visual Representation Learning with Test-Time Training

📅 2026-02-28
🤖 AI Summary
This work addresses the challenge of balancing efficiency and representational capacity in Vision Transformers, whose self-attention mechanism incurs quadratic computational complexity. The authors introduce Vision-TTT, a novel backbone architecture that, for the first time, adapts the linear-complexity Test-Time Training (TTT) framework to vision tasks. Vision-TTT employs self-supervised learning to compress visual token sequences and integrates a bidirectional scanning strategy with Conv2d modules to efficiently capture global image dependencies. On ImageNet classification, Vision-TTT achieves Top-1 accuracy ranging from 77.3% to 82.5%. Notably, Vision-TTT-T at 1280×1280 resolution reduces FLOPs by 79.4%, accelerates inference by 4.38×, and decreases memory usage by 88.9% compared to DeiT-T, while significantly outperforming existing efficient models on downstream tasks.
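The core TTT idea the summary refers to, a hidden state that is itself the weights of a small model, updated by self-supervised gradient steps as tokens arrive, can be sketched as a toy one-dimensional scan. The scalar fast weight, squared reconstruction loss, and learning rate below are simplifying assumptions for illustration, not the paper's actual layer:

```python
def ttt_scan(tokens, lr=0.1):
    """Toy TTT-style linear-time scan (illustrative sketch only).

    The hidden state is a single fast weight w of the tiny model
    f(x) = w * x. Each token triggers one gradient step on the
    self-supervised reconstruction loss L(w) = (w*x - x)^2, so the
    cost grows linearly with sequence length, unlike the quadratic
    cost of self-attention.
    """
    w = 0.0
    outputs = []
    for x in tokens:
        grad = 2.0 * (w * x - x) * x  # dL/dw for L = (w*x - x)^2
        w -= lr * grad                # test-time update of the hidden state
        outputs.append(w * x)         # emit the model's current prediction
    return outputs
```

On a constant sequence the fast weight drifts toward reconstructing its input, which is the sense in which the scan "compresses" the token stream into its hidden state.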

📝 Abstract
Learning efficient and expressive visual representations has long been a pursuit of computer vision research. While Vision Transformers (ViTs) are gradually replacing traditional Convolutional Neural Networks (CNNs) as more scalable vision learners, their applications are plagued by the quadratic complexity of the self-attention mechanism. To address this challenge, we introduce a new linear-time sequence modeling method, Test-Time Training (TTT), into vision and propose Vision-TTT, which compresses the visual token sequence in a novel self-supervised learning manner. By incorporating a bidirectional scan strategy and a Conv2d module, Vision-TTT effectively extends vanilla TTT to model 2D visual correlations with global receptive fields. Extensive experiments show that Vision-TTT-T/S/B achieve 77.3%, 81.2%, and 82.5% Top-1 accuracy on ImageNet classification and also greatly outperform their counterparts on downstream tasks. At 1280×1280 resolution, Vision-TTT-T reduces FLOPs by 79.4% and runs 4.38× faster with 88.9% less memory than DeiT-T. These results demonstrate the expressiveness and efficiency of Vision-TTT as a strong candidate for the next-generation generic visual backbone.
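The bidirectional scan strategy the abstract mentions can be illustrated by running a linear-time TTT-style scan in both directions and summing the two output streams, so every position receives information from the entire sequence. This is a minimal 1D sketch under toy assumptions (a scalar fast weight trained on a squared reconstruction loss, additive fusion); the actual model operates on 2D visual token grids with Conv2d modules:

```python
def ttt_scan(tokens, lr=0.1):
    # One-directional linear-time scan: a scalar fast weight w is
    # updated per token by one gradient step on (w*x - x)^2.
    w = 0.0
    out = []
    for x in tokens:
        w -= lr * 2.0 * (w * x - x) * x
        out.append(w * x)
    return out

def bidirectional_ttt(tokens, lr=0.1):
    # Forward and backward scans give each position a global
    # receptive field; here the two streams are merged by addition.
    fwd = ttt_scan(tokens, lr)
    bwd = ttt_scan(tokens[::-1], lr)[::-1]
    return [f + b for f, b in zip(fwd, bwd)]
```

Because the two passes are mirror images of each other, a palindromic input yields a palindromic output, a quick sanity check that both directions contribute symmetrically.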
Problem

Research questions and friction points this paper is trying to address.

visual representation learning
Vision Transformers
self-attention complexity
efficient vision models
quadratic complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-TTT
Test-Time Training
linear-time attention
visual representation learning
efficient vision models
👥 Authors
Quan Kong (Zhejiang University, Hangzhou, China)
Yanru Xiao (Amazon Web Services, USA)
Yuhao Shen (Zhejiang University, Hangzhou, China)
Cong Wang (Zhejiang University)