CanViT: Toward Active-Vision Foundation Models

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of scalable, general-purpose architectures and pretraining paradigms in active vision, a gap that has kept Active-Vision Foundation Models (AVFMs) unexplored. The authors propose CanViT, the first task- and policy-agnostic AVFM: it binds a retinotopic Vision Transformer backbone to a scene-wide latent workspace (the canvas) via scene-relative RoPE, and introduces Canvas Attention, a novel asymmetric cross-attention mechanism for efficient interaction with this working memory. By decoupling the inference (backbone) and memory (canvas) modules, CanViT enables low-latency, scalable sequential reasoning, and it is pretrained without labels via dense latent distillation from passive to active vision. Experiments show that a frozen CanViT achieves 38.5% mIoU on ADE20K from a single low-resolution fixation, surpassing the previous best active model's 27.6% with 19.5× fewer inference FLOPs and no fine-tuning; with multiple fixations it reaches 45.9% mIoU, and it attains 81.2% top-1 accuracy on ImageNet-1k classification.
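To make the backbone-canvas decoupling concrete, below is a minimal sketch of what one asymmetric read/write step between glimpse tokens and the canvas could look like. All names (`CanvasAttention`, `read`, `write`), dimensions, and the choice of residual pre-norm cross-attention are illustrative assumptions, not the paper's implementation; scene-relative RoPE is omitted. Consistent with the abstract, the canvas side has no self-attention or MLP of its own.

```python
# Hypothetical sketch of an asymmetric canvas read/write step (not the
# authors' code). The canvas is a grid of scene-wide latents updated only
# through cross-attention; glimpse tokens read from it and write back to it.
import torch
import torch.nn as nn

class CanvasAttention(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        # Read: glimpse tokens query the canvas latents.
        self.read = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Write: canvas latents query the glimpse tokens (direction assumed).
        self.write = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_g = nn.LayerNorm(dim)
        self.norm_c = nn.LayerNorm(dim)

    def forward(self, glimpse: torch.Tensor, canvas: torch.Tensor):
        # glimpse: (B, N_g, D) backbone tokens for the current fixation
        # canvas:  (B, N_c, D) scene-wide latent workspace
        g, c = self.norm_g(glimpse), self.norm_c(canvas)
        glimpse = glimpse + self.read(g, c, c, need_weights=False)[0]
        canvas = canvas + self.write(c, g, g, need_weights=False)[0]
        return glimpse, canvas

glimpse, canvas = torch.randn(2, 196, 768), torch.randn(2, 1024, 768)
g_out, c_out = CanvasAttention()(glimpse, canvas)
print(g_out.shape, c_out.shape)  # (2, 196, 768) (2, 1024, 768)
```

Because the canvas carries no self-attention or fully connected layers, the per-fixation cost of this step scales linearly in the number of canvas latents, which is presumably what allows low-latency rollouts over large scenes.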

📝 Abstract
Active computer vision promises efficient, biologically plausible perception through sequential, localized glimpses, but lacks scalable general-purpose architectures and pretraining pipelines. As a result, Active-Vision Foundation Models (AVFMs) have remained unexplored. We introduce CanViT, the first task- and policy-agnostic AVFM. CanViT uses scene-relative RoPE to bind a retinotopic Vision Transformer backbone and a spatiotopic scene-wide latent workspace, the canvas. Efficient interaction with this high-capacity working memory is supported by Canvas Attention, a novel asymmetric cross-attention mechanism. We decouple thinking (backbone-level) and memory (canvas-level), eliminating canvas-side self-attention and fully-connected layers to achieve low-latency sequential inference and scalability to large scenes. We propose a label-free active vision pretraining scheme, policy-agnostic passive-to-active dense latent distillation: reconstructing scene-wide DINOv3 embeddings from sequences of low-resolution glimpses with randomized locations, zoom levels, and lengths. We pretrain CanViT-B from a random initialization on 13.2 million ImageNet-21k scenes -- an order of magnitude more than previous active models -- and 1 billion random glimpses, in 166 hours on a single H100. On ADE20K segmentation, a frozen CanViT-B achieves 38.5% mIoU in a single low-resolution glimpse, outperforming the best active model's 27.6% with 19.5x fewer inference FLOPs and no fine-tuning, as well as its FLOP- or input-matched DINOv3 teacher. Given additional glimpses, CanViT-B reaches 45.9% ADE20K mIoU. On ImageNet-1k classification, CanViT-B reaches 81.2% top-1 accuracy with frozen teacher probes. CanViT generalizes to longer rollouts, larger scenes, and new policies. Our work closes the wide gap between passive and active vision on semantic segmentation and demonstrates the potential of AVFMs as a new research axis.
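As a rough illustration of the passive-to-active dense latent distillation described above, the sketch below scores how well the student's canvas latents reconstruct frozen scene-wide teacher features after a glimpse rollout. The projection head, the one-to-one alignment between canvas positions and teacher patches, and the cosine loss are all assumptions made for the example; the abstract does not specify the loss.

```python
# Hypothetical sketch of the label-free distillation objective (names and
# loss choice are assumptions): a frozen passive teacher (e.g. DINOv3)
# embeds the full scene once, and the active student must reconstruct
# those scene-wide features from its canvas after a sequence of glimpses
# with randomized locations, zoom levels, and lengths.
import torch
import torch.nn.functional as F

def distillation_loss(canvas: torch.Tensor, teacher_feats: torch.Tensor,
                      head: torch.nn.Module) -> torch.Tensor:
    """canvas: (B, N_c, D_s) student canvas latents after a glimpse rollout.
    teacher_feats: (B, N_c, D_t) frozen scene-wide teacher patch embeddings,
    assumed aligned one-to-one with canvas positions."""
    pred = head(canvas)  # project student latents into the teacher's space
    # Cosine distance per latent, averaged over positions and batch; an
    # L2 or SmoothL1 loss would be an equally plausible reading.
    return (1.0 - F.cosine_similarity(pred, teacher_feats, dim=-1)).mean()

B, N_c, D_s, D_t = 2, 1024, 768, 1024
head = torch.nn.Linear(D_s, D_t)
loss = distillation_loss(torch.randn(B, N_c, D_s), torch.randn(B, N_c, D_t), head)
loss.backward()
```

Since the target is a dense feature map of the whole scene rather than a label, this objective needs no annotations and no fixation policy at training time, which is consistent with the paper's "policy-agnostic" framing.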
Problem

Research questions and friction points this paper is trying to address.

active vision
foundation models
scalable architectures
pretraining pipelines
visual perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Active Vision Foundation Model
Canvas Attention
scene-relative RoPE
label-free pretraining
retinotopic-to-spatiotopic binding
👥 Authors
Yohaï-Eliel Berreby
McGill University, Mila - Quebec AI Institute
Sabrina Du
McGill University, Mila - Quebec AI Institute
Audrey Durand
Assistant Professor, Université Laval, Canada
bandits · reinforcement learning · health informatics
B. Suresh Krishna
McGill University