🤖 AI Summary
This work addresses the limited perceptual coverage and adaptability of fixed-view imitation learning in robotic visual manipulation. Inspired by human active perception, the authors propose MAE-Select, a framework that dynamically selects the most informative viewpoint within a single-camera system without requiring explicit viewpoint labels. The approach is the first to leverage a pretrained multi-view masked autoencoder (MAE) for unsupervised active viewpoint selection, combining its representation capabilities with a dynamic selection strategy to substantially improve manipulation performance. Experimental results show that the framework matches, and on several tasks surpasses, multi-camera systems, validating its effectiveness and novelty.
📝 Abstract
Robotic manipulation remains a challenging problem; imitation learning (IL) enables robots to learn tasks from expert demonstrations, but current IL methods typically rely on fixed camera setups, where cameras are manually positioned in static locations, which limits adaptability and coverage. Inspired by human active perception, in which humans dynamically adjust their viewpoint to capture the most relevant and least noisy information, we propose MAE-Select, a novel framework for active viewpoint selection in single-camera robotic systems. MAE-Select leverages pretrained multi-view masked autoencoder representations and dynamically selects the next most informative viewpoint at each time chunk without requiring labeled viewpoints. Extensive experiments demonstrate that MAE-Select improves the capabilities of single-camera systems and, in some cases, even surpasses multi-camera setups. The project will be available at https://mae-select.github.io.
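To make the viewpoint-selection idea concrete, here is a minimal conceptual sketch. It assumes the informativeness of a candidate viewpoint can be scored by how hard its masked observation is to reconstruct, with a trivial mean-fill stand-in for the pretrained MAE decoder. The function names (`mae_reconstruction_error`, `select_viewpoint`), the scoring criterion, and the mask ratio are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def mae_reconstruction_error(image, mask_ratio=0.75, seed=0):
    """Hypothetical stand-in for a pretrained multi-view MAE:
    mask a fraction of patch-like rows and measure how poorly a
    naive mean-fill 'reconstruction' recovers them. A real system
    would use the MAE encoder/decoder instead (an assumption)."""
    rng = np.random.default_rng(seed)
    patches = image.reshape(-1, image.shape[-1])  # flatten H*W patch rows
    n = len(patches)
    masked = rng.choice(n, size=int(mask_ratio * n), replace=False)
    recon = patches.mean(axis=0)  # naive reconstruction of masked rows
    return float(np.mean((patches[masked] - recon) ** 2))

def select_viewpoint(candidate_images):
    """Pick the candidate viewpoint with the highest reconstruction
    error, i.e. the one carrying the most non-redundant information
    under this illustrative criterion."""
    scores = [mae_reconstruction_error(img) for img in candidate_images]
    return int(np.argmax(scores)), scores
```

In use, a flat (uninformative) view scores near zero while a high-variance view scores higher, so the selector prefers the latter; the real framework would plug the MAE's learned representations into this loop at each time chunk.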