LookWhere? Efficient Visual Recognition by Learning Where to Look and What to See from Self-Supervision

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the computational bottleneck in Vision Transformers (ViTs) caused by quadratic growth of token count with image resolution, this paper proposes a dual-path self-supervised framework. A low-resolution branch dynamically localizes salient regions, while a high-resolution branch extracts features exclusively within these selected regions—thereby avoiding full-image high-resolution computation. This work introduces the first joint self-supervised learning paradigm for *where* to compute (region selection) and *what* to compute (feature extraction), eliminating reliance on manual annotations or post-hoc token pruning. The method integrates self-supervised knowledge distillation, multi-scale ViT architectures, and a differentiable region selection mechanism. On Traffic Signs sparse recognition, it reduces FLOPs by up to 34× and accelerates inference by 6×. For ImageNet classification and ADE20K semantic segmentation, it improves accuracy while achieving a 1.36× inference speedup—demonstrating a favorable trade-off between efficiency and accuracy.
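The selector/extractor split described above can be sketched in a toy numpy example. Everything here is a simplification for illustration, not the paper's implementation: the learned low-resolution selector is stood in for by mean patch intensity as a saliency score, and the paper's differentiable region selection is replaced by a hard top-k. The point is only the data flow: score coarse locations cheaply, then embed high-resolution patches at the k selected locations instead of all of them.

```python
import numpy as np

def lookwhere_sketch(image_hi, grid=4, k=3, dim=64, seed=0):
    """Toy selector/extractor split: score a coarse grid, then embed
    high-res patches only at the top-k most salient cells."""
    rng = np.random.default_rng(seed)
    H, W = image_hi.shape
    gh, gw = H // grid, W // grid
    # "Selector": one saliency score per coarse cell (toy stand-in for a
    # small ViT run on a downsampled image -- here, mean intensity).
    scores = image_hi.reshape(grid, gh, grid, gw).mean(axis=(1, 3)).ravel()
    # Hard top-k selection (the paper uses a differentiable mechanism).
    keep = np.argsort(scores)[-k:]
    # "Extractor": linear patch embedding applied only to selected patches.
    W_embed = rng.standard_normal((gh * gw, dim))
    tokens = []
    for idx in keep:
        r, c = divmod(idx, grid)
        patch = image_hi[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw].ravel()
        tokens.append(patch @ W_embed)
    return np.stack(tokens), keep

img = np.zeros((64, 64))
img[0:16, 48:64] = 1.0  # one bright "salient" region (grid cell index 3)
tokens, keep = lookwhere_sketch(img, grid=4, k=3)
print(tokens.shape)  # only k patches are embedded, not all 16
```

The high-resolution embedding cost here scales with k, not with the full grid size, which is the source of the savings the summary describes.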

📝 Abstract
Vision transformers are ever larger, more accurate, and more expensive to compute. The expense is even more extreme at high resolution as the number of tokens grows quadratically with the image size. We turn to adaptive computation to cope with this cost by learning to predict where to compute. Our LookWhere method divides the computation between a low-resolution selector and a high-resolution extractor without ever processing the full high-resolution input. We jointly pretrain the selector and extractor without task supervision by distillation from a self-supervised teacher, in effect, learning where and what to compute simultaneously. Unlike prior token reduction methods, which pay to save by pruning already-computed tokens, and prior token selection methods, which require complex and expensive per-task optimization, LookWhere economically and accurately selects and extracts transferrable representations of images. We show that LookWhere excels at sparse recognition on high-resolution inputs (Traffic Signs), maintaining accuracy while reducing FLOPs by up to 34x and time by 6x. It also excels at standard recognition tasks that are global (ImageNet classification) or local (ADE20K segmentation), improving accuracy while reducing time by 1.36x.
Problem

Research questions and friction points this paper is trying to address.

Efficient visual recognition via adaptive computation
Reducing computational cost in high-resolution vision transformers
Joint learning of where to compute and what to extract
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive computation with low-resolution selector
Joint pretraining via self-supervised distillation
Efficient token selection reducing FLOPs significantly
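A back-of-envelope illustration of where the FLOPs savings come from (toy numbers chosen here for illustration, not the paper's configuration): self-attention cost grows roughly quadratically with token count, so a selector on a small downsampled image plus an extractor over a fraction of the high-resolution tokens is far cheaper than attending over every high-resolution token.

```python
# Relative self-attention cost scales ~quadratically with token count.
def attn_cost(n_tokens: int) -> int:
    return n_tokens ** 2

# Hypothetical setting: 1024px input, 16px patches -> 4096 tokens at full res.
full_res = attn_cost((1024 // 16) ** 2)

# LookWhere-style split (assumed numbers): a selector on a 256px downsample
# (256 tokens) plus an extractor over ~10% of the high-res tokens (410).
selector = attn_cost((256 // 16) ** 2)
extractor = attn_cost(410)

speedup = full_res / (selector + extractor)
print(f"relative attention-cost reduction: {speedup:.0f}x")
```

The exact ratio depends entirely on the assumed resolutions and selection fraction; the paper's reported numbers (up to 34× FLOPs reduction) come from its actual architecture and benchmarks.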
👥 Authors
A. Fuller (Carleton University)
Yousef Yassin (Carleton University)
Junfeng Wen (Assistant Professor, Carleton University; artificial intelligence, machine learning)
Daniel G. Kyrollos (Carleton University)
Tarek Ibrahim (Carleton University)
James R. Green (Carleton University)
Evan Shelhamer (UBC / Vector Institute / CIFAR AI Chair; computer vision, machine learning, deep learning)