DexTac: Learning Contact-aware Visuotactile Policies via Hand-by-hand Teaching

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing dexterous manipulation approaches struggle to generate effective perception-action policies for contact-intensive tasks because they capture too little tactile information. This work proposes DexTac, a method that learns high-dimensional tactile priors, including contact force distributions and spatial contact regions, directly from human hand demonstrations. By combining kinesthetic teaching, multi-dimensional tactile sensing, and policy learning, the approach builds a visuo-tactile fused policy network capable of fine-grained selection and maintenance of contact regions. Evaluated on a unimanual injection task, the method achieves a 91.67% success rate and, in high-precision scenarios with small syringes, outperforms a force-only baseline by 31.67 percentage points, substantially improving the dexterous hand's capability in complex physical interactions.

📝 Abstract
For contact-intensive tasks, the ability to generate policies that produce comprehensive tactile-aware motions is essential. However, existing data collection and skill learning systems for dexterous manipulation often suffer from low-dimensional tactile information. To address this limitation, we propose DexTac, a visuo-tactile manipulation learning framework based on kinesthetic teaching. DexTac captures multi-dimensional tactile data, including contact force distributions and spatial contact regions, directly from human demonstrations. By integrating these rich tactile modalities into a policy network, the resulting contact-aware agent enables a dexterous hand to autonomously select and maintain optimal contact regions during complex interactions. We evaluate our framework on a challenging unimanual injection task. Experimental results demonstrate that DexTac achieves a 91.67% success rate. Notably, in high-precision scenarios involving small-scale syringes, our approach outperforms force-only baselines by 31.67%. These results underscore that learning multi-dimensional tactile priors from human demonstrations is critical for achieving robust, human-like dexterous manipulation in contact-rich environments.
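The abstract's core idea, fusing a visual embedding with multi-dimensional tactile signals (per-taxel forces plus contact-region masks) in one policy network, can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the dimensions (64-d visual feature, 4 fingertips x 16 taxels, 12-DoF action), the concatenation-based fusion, and the untrained random-weight MLP are hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_layer(x, W, b):
    # One linear layer with ReLU, as a stand-in feature encoder.
    return np.maximum(W @ x + b, 0.0)

# Hypothetical inputs: a 64-d visual embedding and a taxel grid of
# contact forces (4 fingertips x 16 taxels each).
vis_feat = rng.standard_normal(64)
contact_forces = rng.random((4, 16))                  # per-taxel normal force
contact_mask = (contact_forces > 0.5).astype(float)   # active contact regions

# Flatten both tactile modalities and fuse all inputs by concatenation.
tac_feat = np.concatenate([contact_forces.ravel(), contact_mask.ravel()])
fused = np.concatenate([vis_feat, tac_feat])          # 64 + 64 + 64 = 192 dims

# Tiny policy head mapping fused features to commands for a 12-DoF hand.
# Weights are random: this shows the data flow, not a trained policy.
W1, b1 = 0.05 * rng.standard_normal((128, fused.size)), np.zeros(128)
W2, b2 = 0.05 * rng.standard_normal((12, 128)), np.zeros(12)
action = W2 @ mlp_layer(fused, W1, b1) + b2

print(fused.shape, action.shape)  # (192,) (12,)
```

The point of the sketch is structural: the tactile branch carries richer-than-scalar information (a force distribution and a spatial contact map), which is what the paper argues a force-only baseline lacks.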
Problem

Research questions and friction points this paper is trying to address.

dexterous manipulation
tactile perception
contact-rich tasks
visuo-tactile policies
kinesthetic teaching
Innovation

Methods, ideas, or system contributions that make the work stand out.

visuo-tactile learning
dexterous manipulation
kinesthetic teaching
multi-dimensional tactile sensing
contact-aware policy
Xingyu Zhang
Horizon Robotics Inc
NLP & VLM & AD
Chaofan Zhang
Institute of Automation, Chinese Academy of Sciences
tactile perception and robotic dexterous manipulation
Boyue Zhang
Institute of Automation, Chinese Academy of Sciences
Zhinan Peng
University of Electronic Science and Technology of China
Shaowei Cui
Institute of Automation, Chinese Academy of Sciences
Shuo Wang
Institute of Automation, Chinese Academy of Sciences
Robotics, Intelligent Robot, Biomimetic Robot, Multi-Robot Systems