🤖 AI Summary
This work addresses the limited zero-shot adaptation of pretrained robotic models in real-world scenarios and their reliance on extensive in-situ data collection. To overcome these limitations, the authors propose a trajectory-generation method based on visual prompt editing. The approach introduces visual prompting into robotic manipulation for the first time, using a conditional injection module to contextually edit existing trajectories. It supports texture transfer and moderate shape adaptation without additional data collection, thereby enabling cross-scenario transfer. Experiments in diverse simulated and real-world environments show that the method significantly improves policy generalization and task-execution reliability, yielding more consistent cross-scenario task performance.
📝 Abstract
Modern robots can perform a wide range of simple tasks and adapt to diverse scenarios within the environments they were trained in. However, deploying pre-trained robot models in real-world user scenarios remains challenging due to their limited zero-shot capabilities, often necessitating extensive on-site data collection. To address this issue, we propose Robotic Scene Cloning (RSC), a novel method for scene-specific adaptation that edits existing robot operation trajectories. RSC generates accurate, scene-consistent samples by leveraging a visual prompting mechanism and a carefully tuned condition injection module. Beyond transferring textures, RSC also performs moderate shape adaptations in response to visual prompts, achieving reliable task performance across a variety of object types. Experiments in various simulated and real-world environments demonstrate that RSC significantly enhances policy generalization in target environments.