🤖 AI Summary
Existing visuomotor policies trained on expert demonstrations generalize poorly under background or robot-embodiment variations. This paper proposes ARRO, a calibration-free, zero-shot augmented-reality visual representation that dynamically identifies and masks distractors using open-vocabulary segmentation and detection models (e.g., SAM, GroundingDINO), then overlays virtual guidance signals to reconstruct a robust observation space. Its core innovation is the first real-time visual masking framework that requires no fine-tuning or camera-to-robot calibration, enabling selective object masking and plug-and-play compatibility across diverse policies. Evaluated on simulated and real-world tabletop manipulation tasks, ARRO significantly improves the robustness of mainstream policies, including Diffusion Policy, Octo, and OpenVLA, to background disturbances, segmentation inaccuracies, and embodiment discrepancies, all without additional training data. Consequently, it reduces reliance on high-fidelity demonstration datasets while maintaining policy performance.
📝 Abstract
Visuomotor policies trained on human expert demonstrations have recently shown strong performance across a wide range of robotic manipulation tasks. However, these policies remain highly sensitive to domain shifts stemming from background or robot embodiment changes, which limits their generalization capabilities. In this paper, we present ARRO, a novel calibration-free visual representation that leverages zero-shot open-vocabulary segmentation and object detection models to efficiently mask out task-irrelevant regions of the scene without requiring additional training. By filtering visual distractors and overlaying virtual guides during both training and inference, ARRO improves robustness to scene variations and reduces the need for additional data collection. We extensively evaluate ARRO with Diffusion Policy on several tabletop manipulation tasks in both simulation and real-world environments, and further demonstrate its compatibility and effectiveness with generalist robot policies, such as Octo and OpenVLA. Across all settings in our evaluation, ARRO yields consistent performance gains, allows for selective masking to choose between different objects, and shows robustness even to challenging segmentation conditions. Videos showcasing our results are available at: augmented-reality-for-robots.github.io
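The core idea, masking out task-irrelevant pixels and overlaying a virtual guide on the observation before it reaches the policy, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: in ARRO the relevant-object masks would come from open-vocabulary detection and segmentation (e.g., GroundingDINO boxes refined by SAM), whereas here they are assumed to be given as boolean arrays, and the "virtual guide" is simplified to a painted disc.

```python
import numpy as np

def mask_distractors(image, relevant_masks, fill_value=0):
    """Keep only pixels covered by task-relevant masks; fill the rest.

    image: (H, W, 3) uint8 RGB frame.
    relevant_masks: list of (H, W) boolean masks, one per task-relevant
        object (assumed to be produced upstream by a segmentation model).
    """
    keep = np.zeros(image.shape[:2], dtype=bool)
    for m in relevant_masks:
        keep |= m  # union of all task-relevant regions
    out = np.full_like(image, fill_value)  # filtered background
    out[keep] = image[keep]
    return out

def overlay_virtual_guide(image, center, radius=5, color=(0, 255, 0)):
    """Paint a disc as a stand-in for a virtual guidance overlay."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    disc = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out = image.copy()
    out[disc] = color
    return out
```

Because the same transform is applied at both training and inference time, the policy never observes the distractor pixels, which is what makes it insensitive to background and embodiment changes in those regions.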