🤖 AI Summary
Current robotic vision policies exhibit limited generalization under observational variations such as camera viewpoint shifts, lighting changes, and distractor objects. To address this, we propose a dual-auxiliary-task co-training framework: (1) state-similarity learning to enforce viewpoint- and illumination-invariant representations, and (2) observation-perturbation invariance regularization to improve robustness to visual disturbances. Crucially, this is the first work to systematically combine real-robot demonstration data with high-fidelity synthetic images rendered in Unreal Engine for multi-source training, without requiring physics-based simulation. The approach significantly improves invariance to static observational variations. Evaluated on unseen viewpoints, lighting conditions, and distractor configurations, our method achieves an average 18% improvement in task success rate over baselines, outperforming existing generative data-augmentation approaches.
📝 Abstract
Reasoning from diverse observations is a fundamental capability for generalist robot policies to operate in a wide range of environments. Despite recent advancements, many large-scale robotic policies remain sensitive to key sources of observational variation such as changes in camera perspective, lighting, and the presence of distractor objects. We posit that the limited generalizability of these models arises from the substantial diversity required to robustly cover these quasistatic axes, coupled with the current scarcity of large-scale robotic datasets that exhibit rich variation across them. In this work, we systematically examine what robots need in order to generalize across these challenging axes by introducing two key auxiliary tasks, state similarity and invariance to observational perturbations, applied to both demonstration data and static visual data. We then show that, via these auxiliary tasks, leveraging both more-expensive robotic demonstration data and less-expensive, visually rich synthetic images generated from non-physics-based simulation (for example, Unreal Engine) can lead to substantial increases in generalization to unseen camera viewpoints, lighting configurations, and distractor conditions. Our results demonstrate that co-training on this diverse data improves performance by 18 percent over existing generative augmentation methods. For more information and videos, please visit https://invariance-cotraining.github.io
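The two auxiliary tasks described above can be sketched as simple regularizers added to the main imitation objective. This is only an illustrative skeleton, not the paper's implementation: the squared-distance form of each term and the weights `w_sim` and `w_inv` are assumptions, and the embeddings stand in for whatever encoder the policy uses.

```python
import numpy as np

def state_similarity_loss(z_view_a, z_view_b):
    """Auxiliary task 1 (hypothetical form): pull together embeddings of the
    same state captured under different viewpoints or lighting conditions."""
    return float(np.mean((z_view_a - z_view_b) ** 2))

def perturbation_invariance_loss(z_clean, z_perturbed):
    """Auxiliary task 2 (hypothetical form): penalize embedding drift between
    a clean observation and a visually perturbed version of it."""
    return float(np.mean((z_clean - z_perturbed) ** 2))

def cotraining_loss(bc_loss, z_view_a, z_view_b, z_clean, z_perturbed,
                    w_sim=0.1, w_inv=0.1):
    """Total co-training objective: the base behavior-cloning loss plus the
    two auxiliary terms. The weights are illustrative, not from the paper."""
    return (bc_loss
            + w_sim * state_similarity_loss(z_view_a, z_view_b)
            + w_inv * perturbation_invariance_loss(z_clean, z_perturbed))
```

In this sketch, demonstration batches would contribute `bc_loss`, while cheap static visual data (e.g., Unreal Engine renders of the same scene under varied cameras and lighting) would contribute only the auxiliary terms, which is what lets the non-physics-based synthetic images participate in training.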