🤖 AI Summary
This work addresses the challenge of learning human-to-robot handover policies without real-world robot interaction data or online adaptation. The proposed end-to-end framework leverages only monocular RGB videos of human handovers. It first reconstructs dynamic 3D scenes from sparse-view human demonstration videos using Sparse-View Gaussian Splatting, jointly estimating camera poses and gripper trajectories to generate action-labeled synthetic demonstrations. A vision-based closed-loop controller is then trained on this simulated data and directly deployed on a physical robotic arm. Crucially, this is the first approach to learn handover policies purely from RGB inputs—requiring zero real robot data—by explicitly bridging the visual domain gap between simulation and reality via 3D scene reconstruction. Evaluated on 16 household objects, the method achieves high success rates and natural interaction quality in both simulation and real-world experiments.
📝 Abstract
Learning robot manipulation policies from raw, real-world image data requires a large number of robot-action trials in the physical environment. Although training in simulation offers a cost-effective alternative, the visual domain gap between simulation and the robot workspace remains a major limitation. Gaussian Splatting visual reconstruction methods have recently provided new directions for robot manipulation by generating realistic environments. In this paper, we propose the first method for learning robot handovers in a supervised manner solely from RGB images, without the need for real-robot training or real-robot data collection. The proposed policy learner, Human-to-Robot Handover using Sparse-View Gaussian Splatting (H2RH-SGS), leverages sparse-view Gaussian Splatting reconstruction of human-to-robot handover scenes to generate robot demonstrations containing image-action pairs captured with a camera mounted on the robot gripper. As a result, simulated camera pose changes in the reconstructed scene can be directly translated into gripper pose changes. We train a robot policy on demonstrations collected with 16 household objects and *directly* deploy this policy in the real environment. Experiments in both the Gaussian Splatting reconstructed scene and real-world human-to-robot handovers demonstrate that H2RH-SGS serves as a new and effective representation for the human-to-robot handover task.
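The abstract notes that, with the camera rigidly mounted on the gripper, simulated camera pose changes translate directly into gripper pose changes. A minimal sketch of that eye-in-hand relationship, assuming a fixed gripper-to-camera extrinsic (all names and values below are illustrative, not from the paper):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def gripper_from_camera(T_base_cam, T_gripper_cam):
    """Recover the gripper pose in the robot base frame from a camera pose:
    T_base_gripper = T_base_cam @ inv(T_gripper_cam)."""
    return T_base_cam @ np.linalg.inv(T_gripper_cam)

# Illustrative extrinsic: camera offset 5 cm along the gripper z-axis.
T_gc = se3(np.eye(3), np.array([0.0, 0.0, 0.05]))
# A camera pose rendered from the reconstructed scene (hypothetical values).
T_bc = se3(np.eye(3), np.array([0.3, 0.0, 0.4]))

T_bg = gripper_from_camera(T_bc, T_gc)
print(T_bg[:3, 3])  # gripper position implied by the rendered camera pose
```

Because the extrinsic is constant, any trajectory of rendered camera poses yields an action-labeled trajectory of gripper poses, which is how image-action pairs can be generated without a physical robot.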