TactAlign: Human-to-Robot Policy Transfer via Tactile Alignment

📅 2026-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of transferring tactile signals from humans to robots, which is hindered by sensor discrepancies and morphological differences. The authors propose TactAlign, a method that achieves unsupervised cross-embodiment tactile alignment without paired data, manual labels, or privileged information. TactAlign leverages pseudo-paired tactile observations generated during hand-object interactions and employs a rectified flow to map human and robot tactile inputs into a shared latent space, combining imitation learning with unsupervised representation learning. Evaluated on contact-rich tasks such as pivoting, insertion, and lid closing, TactAlign improves policy-transfer performance, generalizes to unseen objects and tasks, and enables zero-shot human-to-robot policy transfer on a highly dexterous task (light bulb screwing), substantially improving the generalization and scalability of human-to-robot tactile policy transfer.

📝 Abstract
Human demonstrations collected by wearable devices (e.g., tactile gloves) provide fast and dexterous supervision for policy learning, guided by rich, natural tactile feedback. However, a key challenge is how to transfer human-collected tactile signals to robots despite differences in sensing modalities and embodiment. Existing human-to-robot (H2R) approaches that incorporate touch often assume identical tactile sensors, require paired data, and tolerate little to no embodiment gap between the human demonstrator and the robot, limiting scalability and generality. We propose TactAlign, a cross-embodiment tactile alignment method that transfers human-collected tactile signals to a robot with a different embodiment. TactAlign transforms human and robot tactile observations into a shared latent representation using a rectified flow, without paired datasets, manual labels, or privileged information. Our method enables low-cost latent transport guided by pseudo-pairs derived from hand-object interactions. We demonstrate that TactAlign improves H2R policy transfer across multiple contact-rich tasks (pivoting, insertion, lid closing), generalizes to unseen objects and tasks with less than 5 minutes of human data, and enables zero-shot H2R transfer on a highly dexterous task (light bulb screwing).
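The rectified-flow alignment described in the abstract can be illustrated with a toy sketch. The idea is to learn a velocity field v(x_t, t) that transports one feature distribution to another along straight-line interpolants x_t = (1 - t)·x0 + t·x1, trained to regress the displacement x1 - x0 over pseudo-pairs. Everything below is hypothetical (the 2-D "tactile features", the synthetic ground-truth map, and the linear velocity model are stand-ins, not the paper's architecture); only the rectified-flow objective and the Euler-integrated transport follow the standard formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: "human" and "robot" tactile features as 2-D vectors.
# Pseudo-pairs (x0, x1) play the role of TactAlign's interaction-derived pairs;
# here the robot side is a known toy affine map of the human side.
n, d = 256, 2
A_true = np.array([[2.0, 0.0], [0.0, 0.5]])
x0 = rng.normal(0.0, 1.0, (n, d))          # human-side features
x1 = x0 @ A_true + 1.0                      # robot-side features

# Rectified flow: learn v(x_t, t) ~ x1 - x0 on interpolants
# x_t = (1 - t) * x0 + t * x1. For simplicity v is linear in (x_t, t, 1).
W = np.zeros((d + 2, d))

def feats(x, t):
    return np.concatenate([x, t, np.ones_like(t)], axis=1)

lr = 0.05
for _ in range(2000):
    t = rng.uniform(0.0, 1.0, (n, 1))       # random time per sample
    xt = (1.0 - t) * x0 + t * x1
    target = x1 - x0                         # straight-line displacement
    grad = feats(xt, t).T @ (feats(xt, t) @ W - target) / n
    W -= lr * grad                           # plain gradient descent

def transport(x, steps=50):
    """Euler-integrate dx/dt = v(x, t) from t=0 to t=1 (human -> robot)."""
    x = x.copy()
    for k in range(steps):
        t = np.full((x.shape[0], 1), k / steps)
        x += (1.0 / steps) * (feats(x, t) @ W)
    return x

# Transported human features should land much closer to the robot side
# than the untransported ones do.
x0_test = rng.normal(0.0, 1.0, (64, d))
x1_true = x0_test @ A_true + 1.0
err = np.abs(transport(x0_test) - x1_true).mean()
```

Note this toy uses exactly paired samples, whereas the paper's setting is unpaired: there the pseudo-pairs come from hand-object interactions, which is the part that makes the alignment unsupervised.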
Problem

Research questions and friction points this paper is trying to address.

human-to-robot transfer
tactile alignment
embodiment gap
cross-embodiment
policy transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

tactile alignment
human-to-robot transfer
cross-embodiment
rectified flow
zero-shot policy transfer