HAND Me the Data: Fast Robot Adaptation via Hand Path Retrieval

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of rapidly adapting robotic manipulation skills, this paper proposes a learning-from-hand-demonstration method that requires neither task annotations nor teleoperated robot demonstrations. The approach introduces a hand-path-driven, two-stage cross-modal behavior retrieval mechanism: first, coarse filtering via visual tracking and appearance similarity; second, fine-grained retrieval of matching sub-trajectories from unlabeled autonomous play data based on temporal behavioral similarity. Crucially, the method operates without camera calibration or precise hand pose estimation, significantly lowering the demonstration burden on humans. Only lightweight policy fine-tuning is then required, averaging under four minutes per new task. Evaluated on a real robotic platform, the method achieves more than a twofold improvement in task success rate over baseline approaches, demonstrating both high efficiency and practical applicability.

📝 Abstract
We hand the community HAND, a simple and time-efficient method for teaching robots new manipulation tasks through human hand demonstrations. Instead of relying on task-specific robot demonstrations collected via teleoperation, HAND uses easy-to-provide hand demonstrations to retrieve relevant behaviors from task-agnostic robot play data. Using a visual tracking pipeline, HAND extracts the motion of the human hand from the hand demonstration and retrieves robot sub-trajectories in two stages: first filtering by visual similarity, then retrieving trajectories with similar behaviors to the hand. Fine-tuning a policy on the retrieved data enables real-time learning of tasks in under four minutes, without requiring calibrated cameras or detailed hand pose estimation. Experiments also show that HAND outperforms retrieval baselines by over 2x in average task success rates on real robots. Videos can be found at our project website: https://liralab.usc.edu/handretrieval/.
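The two-stage retrieval described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the paper's pipeline: the feature vectors, the fixed sliding window, and the delta-matching cost below are stand-ins for HAND's actual visual tracking and behavioral similarity components, and the function names are invented for this sketch.

```python
import numpy as np

def coarse_filter(hand_feat, traj_feats, k=2):
    """Stage 1 (sketch): keep the k play trajectories whose visual
    features are most cosine-similar to the hand demonstration's."""
    sims = [
        float(np.dot(hand_feat, f) / (np.linalg.norm(hand_feat) * np.linalg.norm(f)))
        for f in traj_feats
    ]
    return sorted(range(len(traj_feats)), key=lambda i: -sims[i])[:k]

def best_subtrajectory(hand_path, robot_path, window):
    """Stage 2 (sketch): slide a window over a robot end-effector path
    and return the start index whose motion deltas best match the
    hand path's deltas, plus the matching cost."""
    hand_deltas = np.diff(np.asarray(hand_path, dtype=float), axis=0)
    robot_path = np.asarray(robot_path, dtype=float)
    best_i, best_cost = 0, np.inf
    for i in range(len(robot_path) - window):
        robot_deltas = np.diff(robot_path[i:i + window + 1], axis=0)
        # Resample the robot deltas to the hand path's length so the
        # two delta sequences can be compared elementwise.
        idx = np.linspace(0, len(robot_deltas) - 1, len(hand_deltas)).astype(int)
        cost = float(np.linalg.norm(hand_deltas - robot_deltas[idx]))
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i, best_cost
```

Matching motion *deltas* rather than absolute positions is one simple way to compare a hand path and a robot path without camera calibration, since it is invariant to a constant offset between the two frames.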
Problem

Research questions and friction points this paper is trying to address.

Teaching robots new tasks via hand demonstrations
Retrieving robot behaviors from hand motion data
Enabling fast robot adaptation in under four minutes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses hand demonstrations for robot learning
Retrieves behaviors from task-agnostic robot data
Enables real-time learning in under four minutes
👥 Authors
Matthew Hong, University of Southern California (Robot Learning, Reinforcement Learning)
Anthony Liang, University of Southern California (Robot Learning, Reinforcement Learning)
Kevin Kim, Thomas Lord Department of Computer Science, University of Southern California
Harshitha Rajaprakash, Thomas Lord Department of Computer Science, University of Southern California
Jesse Thomason, Assistant Professor, University of Southern California (Natural Language Processing, Artificial Intelligence, Robotics)
Erdem Biyik, Thomas Lord Department of Computer Science, University of Southern California
Jesse Zhang, Thomas Lord Department of Computer Science, University of Southern California