Viewpoint Matters: Dynamically Optimizing Viewpoints with Masked Autoencoder for Visual Manipulation

📅 2026-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited perceptual coverage and adaptability of fixed-view imitation learning in robotic visual manipulation. Inspired by human active perception, the authors propose MAE-Select, a framework that dynamically selects the most informative viewpoints within a single-camera system without requiring explicit viewpoint labels. This approach is the first to leverage a pretrained multi-view Masked Autoencoder (MAE) for unsupervised active viewpoint selection, combining its strong representations with a dynamic optimization strategy to significantly improve manipulation performance. Experimental results demonstrate that the proposed framework matches or even surpasses multi-camera systems across multiple tasks, validating its effectiveness and novelty.

📝 Abstract
Robotic manipulation continues to be a challenge, and imitation learning (IL) enables robots to learn tasks from expert demonstrations. Current IL methods typically rely on fixed camera setups, where cameras are manually positioned in static locations, imposing significant limitations on adaptability and coverage. Inspired by human active perception, where humans dynamically adjust their viewpoint to capture the most relevant and least noisy information, we propose MAE-Select, a novel framework for active viewpoint selection in single-camera robotic systems. MAE-Select fully leverages pre-trained multi-view masked autoencoder representations and dynamically selects the next most informative viewpoint at each time chunk without requiring labeled viewpoints. Extensive experiments demonstrate that MAE-Select improves the capabilities of single-camera systems and, in some cases, even surpasses multi-camera setups. The project will be available at https://mae-select.github.io.
Problem

Research questions and friction points this paper is trying to address.

robotic manipulation
imitation learning
fixed camera setups
viewpoint selection
active perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

active viewpoint selection
masked autoencoder
imitation learning
robotic manipulation
dynamic perception
Pengfei Yi
Institute of Automation, Chinese Academy of Sciences

Yifan Han
Institute of Automation, Chinese Academy of Sciences

Junyan Li
UMass Amherst
Foundation Models · Efficient AI

Litao Liu
Rutgers University

Wenzhao Lian
Google X
Robotics · Machine Learning