RoboVIP: Multi-View Video Generation with Visual Identity Prompting Augments Robot Manipulation

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing robotic manipulation datasets, which suffer from insufficient diversity, scale, and quality, while existing augmentation methods further lack multi-view consistency and temporal coherence. Because conventional text prompts control scene layout only imprecisely, the authors propose a visual identity prompting mechanism that conditions diffusion models on exemplar images to generate multi-view consistent and temporally coherent manipulation videos. They further construct the first scalable visual identity data pool tailored to robotic manipulation, replacing text prompts with visual exemplars to enable precise control over generated scenes. Experiments demonstrate that vision-language-action and visuomotor policies trained on this augmented data achieve significant performance improvements in both simulated and real-world environments.
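The summary describes the core mechanism only at a high level: the diffusion model is conditioned on exemplar images rather than text. Below is a minimal sketch of one way such conditioning could be wired in, assuming a token-based latent video denoiser and generic cross-attention; `VisualIdentityConditioner`, the patch size, and all shapes are illustrative assumptions, not the paper's architecture.

```python
# A minimal sketch of visual identity prompting via cross-attention,
# assuming a token-based latent video denoiser. The class name, shapes,
# and patch size are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class VisualIdentityConditioner(nn.Module):
    """Injects exemplar-image tokens into denoiser features (hypothetical)."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Patch-embed the exemplar image into a sequence of identity tokens.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.norm = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor, exemplar: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim) denoiser tokens for one frame/view.
        # exemplar: (B, 3, H, W) image specifying the desired scene identity.
        tokens = self.patch_embed(exemplar).flatten(2).transpose(1, 2)  # (B, P, dim)
        attended, _ = self.cross_attn(self.norm(feats), tokens, tokens)
        return feats + attended  # residual injection of the visual identity

feats = torch.randn(2, 64, 256)
exemplar = torch.randn(2, 3, 128, 128)
print(VisualIdentityConditioner()(feats, exemplar).shape)  # torch.Size([2, 64, 256])
```

The same injection can be applied across views and frames so that all observations attend to the same identity tokens, which is one plausible route to the multi-view and temporal consistency the summary emphasizes.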

📝 Abstract
The diversity, quantity, and quality of manipulation data are critical for training effective robot policies. However, due to hardware and physical setup constraints, collecting large-scale real-world manipulation data remains difficult to scale across diverse environments. Recent work uses text-prompt conditioned image diffusion models to augment manipulation data by altering the backgrounds and tabletop objects in the visual observations. However, these approaches often overlook the practical need for multi-view and temporally coherent observations required by state-of-the-art policy models. Further, text prompts alone cannot reliably specify the scene setup. To provide the diffusion model with explicit visual guidance, we introduce visual identity prompting, which supplies exemplar images as conditioning inputs to guide the generation of the desired scene setup. To this end, we also build a scalable pipeline to curate a visual identity pool from large robotics datasets. Using our augmented manipulation data to train downstream vision-language-action and visuomotor policy models yields consistent performance gains in both simulation and real-robot settings.
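The abstract also mentions a scalable pipeline for curating a visual identity pool from large robotics datasets, without detailing it. Below is a minimal sketch of what such curation could look like, assuming per-frame object labels and bounding boxes from an off-the-shelf detector; the function name and directory layout are hypothetical.

```python
# A minimal sketch of curating a visual identity pool from dataset frames,
# assuming object labels and bounding boxes are already available (e.g.,
# from an off-the-shelf detector). All names and paths are hypothetical.
from pathlib import Path
from PIL import Image

def curate_identity_pool(frames, pool_dir: str = "identity_pool") -> None:
    """Crop annotated objects out of frames and store them as exemplars.

    frames: iterable of (image_path, [(label, (left, top, right, bottom)), ...])
    """
    pool = Path(pool_dir)
    pool.mkdir(parents=True, exist_ok=True)
    for image_path, boxes in frames:
        img = Image.open(image_path).convert("RGB")
        for i, (label, box) in enumerate(boxes):
            crop = img.crop(box)  # exemplar patch carrying the object's identity
            (pool / label).mkdir(exist_ok=True)
            crop.save(pool / label / f"{Path(image_path).stem}_{i}.png")

# Usage (boxes would come from a detector run over a robotics dataset):
# curate_identity_pool([("frame_000.png", [("mug", (10, 20, 90, 110))])])
```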
Problem

Research questions and friction points this paper is trying to address.

multi-view video generation
visual identity prompting
robot manipulation
temporal coherence
scene setup specification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Identity Prompting
Multi-View Video Generation
Diffusion Models
Robot Manipulation Data Augmentation
Temporally Coherent Observations
👥 Authors
Boyang Wang, Shanghai AI Laboratory
Haoran Zhang, University of Michigan
Shujie Zhang, Shanghai AI Laboratory
Jinkun Hao, Shanghai AI Laboratory
Mingda Jia, Shanghai AI Laboratory
Qi Lv, Shanghai AI Laboratory
Yucheng Mao, UC San Diego (3D Computer Vision)
Zhaoyang Lyu, PhD in Information Engineering, The Chinese University of Hong Kong (Machine Learning)
Jia Zeng, Shanghai AI Laboratory (Embodied AI, Robotic Manipulation, Vision-Language-Action)
Xudong Xu, Shanghai AI Laboratory
Jiangmiao Pang, Shanghai AI Laboratory