Feel the Force: Contact-Driven Learning from Humans

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fine-grained robot force control generalizes poorly in real-world scenarios, and existing methods based on simulation or autonomous robot data collection transfer poorly. Method: a human-demonstration-driven framework for jointly learning tactile perception and execution: a tactile glove captures human hand contact forces while a vision-based model estimates hand pose, and the two modalities are fused into a cross-modal action representation. A closed-loop force-prediction policy is trained on this representation and transferred to a Franka Panda robot, where it is combined with PD-based force-tracking control for end-to-end force-sensitive manipulation. Contribution/Results: this is the first work to jointly model human tactile behavior and learn unified cross-embodiment representations without requiring simulation or online robot data collection. Evaluated on five force-sensitive tasks, the approach achieves a 77% success rate, significantly outperforming vision-only imitation learning, demonstrating the efficacy and generalization benefits of human tactile supervision for low-level force control.
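The fusion step described above pairs glove force readings with an estimated hand pose at each timestep. A minimal sketch of what such a per-timestep action vector might look like is below; the field names, the quaternion orientation convention, and the mean-of-fingertips force summary are all illustrative assumptions, not the authors' actual representation.

```python
import numpy as np

def build_action(hand_pose, glove_forces):
    """Fuse an estimated hand pose and tactile-glove force readings into a
    single per-timestep action vector: position + orientation + target force.

    hand_pose: dict with "position" (3,) in meters and "orientation" (4,)
               quaternion -- hypothetical field names for illustration.
    glove_forces: iterable of fingertip force readings in newtons.
    """
    position = np.asarray(hand_pose["position"], dtype=float)        # (3,)
    orientation = np.asarray(hand_pose["orientation"], dtype=float)  # (4,)
    # Summarize fingertip forces into one grasp-force target, e.g. the mean
    # of the readings that map onto a parallel-jaw gripper's closure force.
    target_force = float(np.mean(np.asarray(glove_forces, dtype=float)))
    return np.concatenate([position, orientation, [target_force]])   # (8,)
```

A policy trained on sequences of such vectors can then be retargeted to the robot, since the pose and force components are embodiment-agnostic.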

📝 Abstract
Controlling fine-grained forces during manipulation remains a core challenge in robotics. While robot policies learned from robot-collected data or simulation show promise, they struggle to generalize across the diverse range of real-world interactions. Learning directly from humans offers a scalable solution, enabling demonstrators to perform skills in their natural embodiment and in everyday environments. However, visual demonstrations alone lack the information needed to infer precise contact forces. We present FeelTheForce (FTF): a robot learning system that models human tactile behavior to learn force-sensitive manipulation. Using a tactile glove to measure contact forces and a vision-based model to estimate hand pose, we train a closed-loop policy that continuously predicts the forces needed for manipulation. This policy is re-targeted to a Franka Panda robot with tactile gripper sensors using shared visual and action representations. At execution, a PD controller modulates gripper closure to track predicted forces, enabling precise, force-aware control. Our approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks. Code and videos are available at https://feel-the-force-ftf.github.io.
Problem

Research questions and friction points this paper is trying to address.

Controlling fine-grained forces in robotic manipulation
Generalizing robot policies across real-world interactions
Inferring precise contact forces from human demonstrations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses tactile glove for contact force measurement
Trains closed-loop policy for force prediction
Applies PD controller for force-aware execution
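The PD-based force tracking in the last bullet can be sketched roughly as follows: the controller adjusts gripper width until the measured contact force matches the policy's predicted force. The gains, the gripper-width range, and every function name here are illustrative assumptions, not the authors' implementation.

```python
def pd_force_tracking(target_force, read_force, set_gripper_width,
                      width, kp=5e-4, kd=1e-6, dt=0.02, steps=100):
    """Track a predicted contact force by modulating gripper closure.

    target_force: force (N) predicted by the policy for this step.
    read_force: callable returning the current measured contact force (N).
    set_gripper_width: callable commanding a gripper width (m).
    width: current gripper width (m); the Franka gripper spans roughly 0-8 cm.
    """
    prev_error = target_force - read_force()
    for _ in range(steps):
        error = target_force - read_force()       # force error (N)
        d_error = (error - prev_error) / dt       # error rate (N/s)
        # Too little force (positive error) -> close the gripper further.
        width -= kp * error + kd * d_error
        width = max(0.0, min(0.08, width))        # clamp to physical range
        set_gripper_width(width)
        prev_error = error
    return width
```

Because the policy runs closed-loop, this inner controller only needs to hold each predicted force until the next prediction arrives, which is why a simple PD law suffices for force-aware execution.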