Emulating Human-like Adaptive Vision for Efficient and Flexible Machine Visual Perception

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current machine vision models process entire images passively, incurring computational cost that scales with both input resolution and model size, which limits practicality and interpretability. To address this, we propose AdaptiveNN, a paradigm shift from passive perception to active, adaptive vision. Inspired by human visual attention, AdaptiveNN employs a coarse-to-fine sequential decision-making mechanism that dynamically focuses on task-relevant regions and accumulates visual evidence across fixations. Crucially, it integrates representation learning with self-rewarding reinforcement learning, enabling end-to-end training of the non-differentiable model without external annotations or auxiliary supervision on fixation locations. The framework supports dynamic inference termination and cross-task adaptation without retraining. Evaluated across 17 diverse benchmarks, AdaptiveNN reduces inference cost by up to 28x while preserving accuracy, demonstrating substantial gains in efficiency, flexibility, and interpretability.

📝 Abstract
Human vision is highly adaptive, efficiently sampling intricate environments by sequentially fixating on task-relevant regions. In contrast, prevailing machine vision models passively process entire scenes at once, resulting in excessive resource demands that scale with spatial-temporal input resolution and model size, a critical limitation impeding both future advances and real-world application. Here we introduce AdaptiveNN, a general framework aiming to drive a paradigm shift from 'passive' to 'active, adaptive' vision models. AdaptiveNN formulates visual perception as a coarse-to-fine sequential decision-making process, progressively identifying and attending to regions pertinent to the task, incrementally combining information across fixations, and actively concluding observation when sufficient. We establish a theory integrating representation learning with self-rewarding reinforcement learning, enabling end-to-end training of the non-differentiable AdaptiveNN without additional supervision on fixation locations. We assess AdaptiveNN on 17 benchmarks spanning 9 tasks, including large-scale visual recognition, fine-grained discrimination, visual search, processing images from real driving and medical scenarios, language-driven embodied AI, and side-by-side comparisons with humans. AdaptiveNN achieves up to 28x inference cost reduction without sacrificing accuracy, flexibly adapts to varying task demands and resource budgets without retraining, and provides enhanced interpretability via its fixation patterns, demonstrating a promising avenue toward efficient, flexible, and interpretable computer vision. Furthermore, AdaptiveNN exhibits closely human-like perceptual behaviors in many cases, revealing its potential as a valuable tool for investigating visual cognition. Code is available at https://github.com/LeapLabTHU/AdaptiveNN.
Problem

Research questions and friction points this paper is trying to address.

Emulating human adaptive vision for efficient machine perception
Reducing computational costs in visual tasks without accuracy loss
Enabling flexible and interpretable computer vision models
Innovation

Methods, ideas, or system contributions that make the work stand out.

AdaptiveNN framework for active vision
Coarse-to-fine sequential decision-making process
Self-rewarding reinforcement learning without supervision
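The coarse-to-fine loop summarized above can be sketched compactly. The following is an illustrative toy, not the authors' implementation: the random fixation policy, scalar evidence accumulator, and two-class readout are hypothetical stand-ins (in AdaptiveNN the fixation policy is a learned network trained with self-rewarding reinforcement learning), but the control flow mirrors the described mechanism: fixate, accumulate evidence, and actively stop once the prediction is confident.

```python
import math
import random

def adaptive_inference(image, patch=8, max_fixations=5, threshold=0.9, seed=0):
    """Toy sketch of sequential adaptive perception (hypothetical stand-in).

    image: 2-D list of floats. At each step, a fixation crop is sampled,
    its evidence is combined with previous fixations, and observation
    concludes early once the readout exceeds the confidence threshold.
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    evidence = 0.0
    for t in range(1, max_fixations + 1):
        # Stand-in fixation policy: uniformly random location.
        # In the paper this policy is learned, not random.
        r = rng.randrange(h - patch + 1)
        c = rng.randrange(w - patch + 1)
        crop = [image[r + i][c + j] for i in range(patch) for j in range(patch)]
        evidence += sum(crop) / len(crop)          # incrementally combine information
        p = 1.0 / (1.0 + math.exp(-evidence / t))  # toy two-class readout
        if max(p, 1.0 - p) >= threshold:           # actively conclude when sufficient
            break
    return (p, 1.0 - p), t
```

Raising `threshold` trades compute for confidence: easy inputs terminate after few fixations while ambiguous ones consume the full budget, which is the source of the input-dependent cost savings the paper reports.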
Yulin Wang
Shanghai Jiao Tong University
Yang Yue
Learning And Perception (LEAP) Lab, Department of Automation, Tsinghua University
Huanqian Wang
Learning And Perception (LEAP) Lab, Department of Automation, Tsinghua University
Haojun Jiang
Learning And Perception (LEAP) Lab, Department of Automation, Tsinghua University
Yizeng Han
Alibaba DAMO Academy
Dynamic Neural Networks, Efficient Deep Learning, Computer Vision
Zanlin Ni
Tsinghua University
Computer Vision, Deep Learning
Yifan Pu
Learning And Perception (LEAP) Lab, Department of Automation, Tsinghua University
Minglei Shi
Learning And Perception (LEAP) Lab, Department of Automation, Tsinghua University
Rui Lu
Learning And Perception (LEAP) Lab, Department of Automation, Tsinghua University
Qisen Yang
Learning And Perception (LEAP) Lab, Department of Automation, Tsinghua University
Andrew Zhao
Tsinghua University
Reinforcement Learning, Language Agent, Reasoning
Zhuofan Xia
PhD candidate, Tsinghua University
Efficient Deep Learning, Computer Vision, Multimodal Learning
Shiji Song
Tsinghua University
Modeling and Optimization, Complex Systems and Stochastic Systems
Gao Huang
Learning And Perception (LEAP) Lab, Department of Automation, Tsinghua University