🤖 AI Summary
Existing robotic systems exhibit limited performance in fine-grained, contact-intensive manipulation tasks, primarily due to ineffective utilization of tactile feedback. This work proposes TouchGuide, a cross-modal fusion approach that leverages tactile guidance during inference to refine a pre-trained visuomotor policy: it first generates a coarse action from visual input and then refines it using a task-specific Contact Physical Model (CPM). TouchGuide is the first method to integrate visual and tactile information within a low-dimensional action space, constructing the CPM via contrastive learning and combining it with diffusion or flow-matching policies alongside TacUMI, a novel cost-effective tactile data collection system. Evaluated on five challenging tasks, including shoe lacing and chip handover, TouchGuide significantly outperforms existing visuo-tactile methods, demonstrating strong effectiveness and generalization capability.
📝 Abstract
Fine-grained and contact-rich manipulation remains challenging for robots, largely due to the underutilization of tactile feedback. To address this, we introduce TouchGuide, a novel cross-policy visuo-tactile fusion paradigm that fuses modalities within a low-dimensional action space. Specifically, TouchGuide operates in two stages to guide a pre-trained diffusion or flow-matching visuomotor policy at inference time. First, the policy produces a coarse, visually plausible action using only visual inputs during early sampling. Second, a task-specific Contact Physical Model (CPM), trained through contrastive learning on limited expert demonstrations, provides a tactile-informed feasibility score that steers the sampling process toward refined actions satisfying realistic physical contact constraints. Furthermore, to facilitate TouchGuide training with high-quality and cost-effective data, we introduce TacUMI, a data collection system. TacUMI achieves a favorable trade-off between precision and affordability: by leveraging rigid fingertips, it obtains direct tactile feedback, enabling the collection of reliable tactile data. Extensive experiments on five challenging contact-rich tasks, such as shoe lacing and chip handover, show that TouchGuide consistently and significantly outperforms state-of-the-art visuo-tactile policies.
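The two-stage inference described above can be sketched in simplified form: a pre-trained policy proposes a coarse action from visual input, and a tactile-informed feasibility score steers refinement toward physically plausible contact. This is a minimal illustrative sketch based only on the abstract; the function names, the toy 3-DoF action space, the toy quadratic feasibility score, and the finite-difference guidance loop are all assumptions, not the paper's actual implementation.

```python
import numpy as np

def coarse_visual_policy(rng):
    """Stand-in for the early, vision-only diffusion/flow sampling
    stage (illustrative: returns a random toy 3-DoF action)."""
    return rng.normal(loc=0.5, scale=0.2, size=3)

def cpm_feasibility(action, contact_target=np.array([0.3, 0.3, 0.3])):
    """Toy stand-in for the CPM's tactile-informed feasibility score:
    higher when the action is closer to a hypothetical physically
    plausible contact configuration (assumed, not from the paper)."""
    return -np.sum((action - contact_target) ** 2)

def tactile_guided_refine(action, score_fn, steps=50, lr=0.1, eps=1e-4):
    """Steer the coarse action uphill on the feasibility score using
    finite-difference gradients, a simple stand-in for score-guided
    sampling refinement."""
    a = action.copy()
    for _ in range(steps):
        grad = np.zeros_like(a)
        for i in range(len(a)):
            d = np.zeros_like(a)
            d[i] = eps
            grad[i] = (score_fn(a + d) - score_fn(a - d)) / (2 * eps)
        a = a + lr * grad  # move toward higher contact feasibility
    return a

rng = np.random.default_rng(0)
coarse = coarse_visual_policy(rng)       # stage 1: coarse visual action
refined = tactile_guided_refine(coarse, cpm_feasibility)  # stage 2
```

In this toy setup the refined action scores strictly higher under the feasibility function than the coarse proposal, mirroring the abstract's claim that tactile guidance aligns the action with contact constraints; the real method operates inside the policy's sampling process rather than as a post-hoc gradient ascent.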