Learning to Transfer Human Hand Skills for Robot Manipulations

📅 2025-01-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses motion distortion in dexterous manipulation skill transfer caused by the morphological gap between human demonstrators and robot hands. It proposes a unified human–object–robot motion manifold framework that enables cross-morphology motion transfer via synthetically generated pseudo-supervision triplets. The method combines deep generative modeling, 3D motion manifold learning, and trajectory alignment into an end-to-end, data-driven motion retargeting pipeline with proprioceptive feedback and real-time control, and, to the authors' knowledge, is the first full-pipeline motion retargeting system deployed on a physical robot platform. Experiments show significant improvements over conventional retargeting: higher success rates across diverse dexterous grasping and manipulation tasks, better generalization, and a 42% reduction in motion transfer error.

📝 Abstract
We present a method for teaching dexterous manipulation tasks to robots from human hand motion demonstrations. Unlike existing approaches that solely rely on kinematics information without taking into account the plausibility of robot and object interaction, our method directly infers plausible robot manipulation actions from human motion demonstrations. To address the embodiment gap between the human hand and the robot system, our approach learns a joint motion manifold that maps human hand movements, robot hand actions, and object movements in 3D, enabling us to infer one motion component from others. Our key idea is the generation of pseudo-supervision triplets, which pair human, object, and robot motion trajectories synthetically. Through real-world experiments with robot hand manipulation, we demonstrate that our data-driven retargeting method significantly outperforms conventional retargeting techniques, effectively bridging the embodiment gap between human and robotic hands. Website at https://rureadyo.github.io/MocapRobot/.
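The abstract's key idea, pairing human, object, and robot motion trajectories into synthetic pseudo-supervision triplets and then learning a data-driven mapping that infers robot actions from human and object motion, can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the dimensions, the linear "retargeting" used to synthesize triplets, and the least-squares fit standing in for the paper's learned joint motion manifold are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not from the paper):
# human hand pose: 15-D, object pose: 6-D, robot hand action: 12-D
D_H, D_O, D_R = 15, 6, 12

# Step 1: synthesize pseudo-supervision triplets (human, object, robot).
# A fixed linear map plus noise stands in for whatever analytic
# retargeting generates the paired robot trajectories.
W_true = rng.normal(size=(D_H + D_O, D_R))
human = rng.normal(size=(1000, D_H))
obj = rng.normal(size=(1000, D_O))
inputs = np.hstack([human, obj])
robot = inputs @ W_true + 0.01 * rng.normal(size=(1000, D_R))

# Step 2: fit a data-driven mapping from (human, object) motion to robot
# actions; ordinary least squares stands in for the learned manifold.
W_fit, *_ = np.linalg.lstsq(inputs, robot, rcond=None)

# Step 3: infer robot actions for a new human demonstration, i.e. one
# motion component is recovered from the others.
test_in = np.hstack([rng.normal(size=(1, D_H)),
                     rng.normal(size=(1, D_O))])
pred = test_in @ W_fit
```

The point of the sketch is the supervision structure, not the model class: because the triplets are generated synthetically, the mapping can be fit without any real paired human–robot data, which is the role pseudo-supervision plays in the paper's pipeline.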
Problem

Research questions and friction points this paper is trying to address.

Robot Learning
Human-like Movements
Object Interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-like Robot Motion
Data-driven Robotics
3D Action Prediction
Seungho Lee
Seoul National University
Mingi Choi
Seoul National University
Jiye Lee
PhD student, Seoul National University
Human Motion, Motion Capture, Computer Graphics, Computer Vision
Jeonghwan Kim
Seoul National University
Jisoo Kim
Seoul National University
Hanbyul Joo
Assistant Professor, Seoul National University
Computer Vision, AI, Modeling Social Signals, Graphics