🤖 AI Summary
To address the computational bottleneck in Vision Transformers (ViTs), where token count grows quadratically with image resolution, this paper proposes a dual-path self-supervised framework. A low-resolution branch dynamically localizes salient regions, while a high-resolution branch extracts features only within the selected regions, avoiding full-image high-resolution computation. This work introduces the first joint self-supervised learning paradigm for *where* to compute (region selection) and *what* to compute (feature extraction), eliminating reliance on manual annotations or post-hoc token pruning. The method integrates self-supervised knowledge distillation, multi-scale ViT architectures, and a differentiable region selection mechanism. On sparse recognition over high-resolution inputs (Traffic Signs), it reduces FLOPs by up to 34× and inference time by 6× while maintaining accuracy. On ImageNet classification and ADE20K semantic segmentation, it improves accuracy while running 1.36× faster, demonstrating a strong trade-off between efficiency and generalization.
📝 Abstract
Vision transformers are ever larger, more accurate, and more expensive to compute. The expense is even more extreme at high resolution as the number of tokens grows quadratically with the image size. We turn to adaptive computation to cope with this cost by learning to predict where to compute. Our LookWhere method divides the computation between a low-resolution selector and a high-resolution extractor without ever processing the full high-resolution input. We jointly pretrain the selector and extractor without task supervision by distillation from a self-supervised teacher, in effect, learning where and what to compute simultaneously. Unlike prior token reduction methods, which pay to save by pruning already-computed tokens, and prior token selection methods, which require complex and expensive per-task optimization, LookWhere economically and accurately selects and extracts transferable representations of images. We show that LookWhere excels at sparse recognition on high-resolution inputs (Traffic Signs), maintaining accuracy while reducing FLOPs by up to 34x and time by 6x. It also excels at standard recognition tasks that are global (ImageNet classification) or local (ADE20K segmentation), improving accuracy while reducing time by 1.36x.
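The dual-path split described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the selector here scores patches by mean intensity as a stand-in for a learned saliency head, and the patch size, downscale factor, and top-k budget are arbitrary assumptions. The point it shows is the compute pattern: the full image is only ever touched at low resolution, and high-resolution work scales with the number of selected patches, not with image area.

```python
import numpy as np

def select_and_extract(image, patch=32, downscale=4, k=8):
    """Toy LookWhere-style dual path (illustrative only, not the paper's code).

    A cheap low-resolution selector scores image patches; the extractor then
    reads only the top-k patches at full resolution, so high-res compute is
    O(k * patch^2) instead of O(H * W).
    """
    H, W = image.shape[:2]
    gh, gw = H // patch, W // patch  # patch grid dimensions

    # --- selector: score each patch from a low-resolution view.
    # (Here: mean intensity per patch; in the real method this would be a
    # learned, differentiable saliency predictor.)
    low = image[::downscale, ::downscale]
    lp = patch // downscale  # patch size in the low-res view
    scores = low[:gh * lp, :gw * lp].reshape(gh, lp, gw, lp).mean(axis=(1, 3))

    # --- pick the k most salient patches ---
    top = np.argsort(scores.ravel())[::-1][:k]
    coords = [(i // gw, i % gw) for i in top]

    # --- extractor: read only the selected patches at full resolution ---
    feats = np.stack([
        image[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
        for r, c in coords
    ])
    return coords, feats
```

For example, on a 128×128 image that is dark everywhere except one bright 32×32 region, the selector with `k=1` returns just that region's grid coordinate, and the extractor materializes only that single full-resolution patch.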