🤖 AI Summary
This work addresses the limitations of existing robotic manipulation datasets, which suffer from insufficient diversity, scale, and quality, as well as a lack of multi-view consistency and temporal coherence. To overcome the imprecision of conventional text prompts in controlling scene layout, the authors propose a visual identity prompting mechanism that conditions diffusion models on exemplar images to generate multi-view consistent and temporally coherent manipulation videos. They further construct the first scalable visual identity data pool tailored for robotic manipulation, replacing text prompts with visual exemplars to enable precise control over generated scenes. Experiments demonstrate that vision-language-action policies and visuomotor policies trained on this augmented data achieve significant performance improvements in both simulated and real-world environments.
📝 Abstract
The diversity, quantity, and quality of manipulation data are critical for training effective robot policies. However, due to hardware and physical setup constraints, collecting real-world manipulation data at scale across diverse environments remains difficult. Recent work uses text-prompt conditioned image diffusion models to augment manipulation data by altering the backgrounds and tabletop objects in the visual observations. However, these approaches often overlook the practical need for multi-view and temporally coherent observations required by state-of-the-art policy models. Further, text prompts alone cannot reliably specify the scene setup. To provide the diffusion model with explicit visual guidance, we introduce visual identity prompting, which supplies exemplar images as conditioning inputs to guide the generation of the desired scene setup. To this end, we also build a scalable pipeline to curate a visual identity pool from large robotics datasets. Using our augmented manipulation data to train downstream vision-language-action and visuomotor policy models yields consistent performance gains in both simulation and real-robot settings.
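The abstract does not specify the conditioning architecture, but the core idea, supplying an exemplar image to the diffusion model alongside the noisy latent, can be sketched as channel-wise concatenation, one common conditioning scheme. The function names and the toy downsampling encoder below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def encode_exemplar(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a visual encoder: average-pool the exemplar
    image down to the latent's spatial resolution. A real pipeline
    would use a learned encoder (e.g., a VAE or ViT)."""
    h, w, c = image.shape
    return image.reshape(h // 8, 8, w // 8, 8, c).mean(axis=(1, 3))

def condition_on_identity(noisy_latent: np.ndarray,
                          exemplar: np.ndarray) -> np.ndarray:
    """Concatenate the visual identity prompt with the noisy latent
    along the channel axis, forming the denoiser's input."""
    identity = encode_exemplar(exemplar)
    return np.concatenate([noisy_latent, identity], axis=-1)

# 64x64 RGB exemplar -> 8x8 identity map at latent resolution
exemplar = np.random.rand(64, 64, 3)
latent = np.random.rand(8, 8, 4)  # 4-channel diffusion latent
x = condition_on_identity(latent, exemplar)
print(x.shape)  # (8, 8, 7): 4 latent + 3 identity channels
```

Because the exemplar enters as a dense spatial input rather than a text token, it can pin down object appearance and scene layout far more precisely than a prompt string, which is the motivation the abstract gives for replacing text prompts.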