🤖 AI Summary
This work addresses the limited generalization of robotic imitation learning caused by the narrow viewpoint of expert demonstrations and the high cost of multi-environment data collection. The authors propose a practical framework that requires no additional human effort, leveraging synchronized multi-camera recordings of a single expert trajectory to generate multi-view pseudo-demonstrations that enrich the training distribution. Central to the approach are a camera-space action representation and a multi-view action aggregation mechanism, which jointly enhance the viewpoint invariance of visual representations. Requiring only minimal hardware modifications, the method substantially outperforms single-view baselines in both simulation and real-world manipulation tasks, achieving significantly improved data efficiency and cross-scenario generalization.
📝 Abstract
The generalization ability of imitation learning policies for robotic manipulation is fundamentally constrained by the diversity of expert demonstrations, while collecting demonstrations across varied environments is costly and difficult in practice. In this paper, we propose a practical framework that exploits inherent scene diversity without additional human effort by scaling camera views during demonstration collection. Instead of acquiring more trajectories, multiple synchronized camera perspectives are used to generate pseudo-demonstrations from each expert trajectory, which enriches the training distribution and improves viewpoint invariance in visual representations. We analyze how different action spaces interact with view scaling and show that camera-space representations further enhance diversity. In addition, we introduce a multi-view action aggregation method that allows single-view policies to benefit from multiple cameras during deployment. Extensive experiments in simulation and real-world manipulation tasks demonstrate significant gains in data efficiency and generalization compared to single-view baselines. Our results suggest that scaling camera views provides a practical and scalable solution for imitation learning: it requires minimal additional hardware setup and integrates seamlessly with existing imitation learning algorithms. Our project website is available at https://yichen928.github.io/robot_multiview.
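To make the aggregation idea concrete, the following is a minimal hypothetical sketch (not the paper's actual implementation): each camera's policy predicts a translational action in its own camera frame, and the predictions are mapped into the shared robot-base frame via known extrinsics and averaged. The function name, shapes, and the restriction to translation-only actions are illustrative assumptions.

```python
import numpy as np

def aggregate_actions(cam_actions, cam_to_base_rotations):
    """Average per-view camera-space actions in the robot base frame.

    cam_actions: list of (3,) translation deltas, one per camera view.
    cam_to_base_rotations: list of (3, 3) rotation matrices mapping each
        camera frame into the base frame (illustrative; real extrinsics
        would come from calibration).
    """
    # Rotate each camera-space prediction into the common base frame.
    base_actions = [R @ a for R, a in zip(cam_to_base_rotations, cam_actions)]
    # Fuse by simple averaging; other aggregation rules are possible.
    return np.mean(base_actions, axis=0)

# Toy example: two cameras whose frames happen to coincide with the base
# frame, so aggregation reduces to a plain average of the two predictions.
identity = np.eye(3)
a1 = np.array([0.02, 0.00, 0.01])
a2 = np.array([0.00, 0.04, 0.01])
fused = aggregate_actions([a1, a2], [identity, identity])
print(fused)  # -> [0.01 0.02 0.01]
```

Averaging in a common frame is one natural fusion rule; the key point is that camera-space actions only become comparable across views after being expressed in a shared coordinate frame.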