🤖 AI Summary
This work addresses the challenge of enabling robots to generalize from a single human demonstration video to novel repetitive tasks (e.g., bin-packing) without additional annotations or task-specific training. The authors propose a vision-based imitation learning framework centered on Slot-Net, a novel slot-level placement detection network that enables cross-video re-identification of manipulated objects and target slots while jointly modeling object-slot relational structure and regressing 6D relative pose. The method combines multimodal visual foundation models with the lightweight Slot-Net architecture. Evaluated on a new benchmark of real-world videos, it outperforms existing baselines and is deployed on a physical robot to achieve end-to-end, vision-guided precise placement. This advances imitation learning toward task generalization under minimal supervision and across unseen object instances.
📝 Abstract
The majority of modern robot learning methods focus on learning a set of pre-defined tasks with limited or no generalization to new tasks. Extending the robot's skillset to novel tasks requires gathering an extensive amount of additional training data. In this paper, we address the problem of teaching robots new repetitive tasks (e.g., packing) from human demonstration videos. This requires understanding the human video to identify which object is being manipulated (the pick object) and where it is being placed (the placement slot). At inference time, the system must also re-identify the pick object and the placement slots, together with their relative poses, to enable robot execution of the task. To tackle this, we propose SLeRP, a modular system that leverages several advanced visual foundation models and a novel slot-level placement detector, Slot-Net, eliminating the need for expensive video demonstrations for training. We evaluate our system on a new benchmark of real-world videos. The results show that SLeRP outperforms several baselines and can be deployed on a real robot.
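The pipeline described above (parse the demo video to find the pick object and placement slot, re-identify both in the robot's view, and recover a relative pose for execution) can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: all class and function names are hypothetical, and the detectors are stubbed out where SLeRP would invoke visual foundation models and Slot-Net.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """A detected entity in an image (hypothetical structure)."""
    label: str
    bbox: Tuple[int, int, int, int]  # x, y, width, height in pixels

@dataclass
class PlacementStep:
    """One pick-and-place step recovered from the demonstration."""
    pick: Detection                  # the manipulated object
    slot: Detection                  # the target placement slot
    rel_pose: Tuple[float, ...]      # 6D relative pose (x, y, z, roll, pitch, yaw)

def parse_demo_video(frames: List[str]) -> PlacementStep:
    """Stub: identify the pick object and placement slot in the human video.
    A real system would run foundation-model detectors and a slot-level
    placement detector here; we return fixed placeholders."""
    pick = Detection("object_0", (10, 10, 32, 32))
    slot = Detection("slot_0", (100, 40, 36, 36))
    return PlacementStep(pick, slot, (0.0, 0.0, 0.05, 0.0, 0.0, 0.0))

def reidentify_at_inference(step: PlacementStep, scene: str) -> PlacementStep:
    """Stub: re-detect the same object and slot in the robot's camera view.
    A real system would match visual features across videos; here we
    simply pass the demo-time detections through unchanged."""
    return step

# Demo-time parsing, then inference-time re-identification.
step = parse_demo_video(["frame_0.png", "frame_1.png"])
plan = reidentify_at_inference(step, "robot_view.png")
print(plan.pick.label, plan.slot.label, len(plan.rel_pose))
```

The split into a demo-parsing stage and an inference-time re-identification stage mirrors the modular design the abstract describes: no step needs task-specific training, since each stage can delegate to pretrained visual models.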