🤖 AI Summary
Tactile feedback is essential for robotic physical interaction, yet real tactile sensors suffer from high cost, fragility, calibration complexity, and inter-device variability, while purely vision-based methods achieve only limited performance. Method: We propose the first reinforcement learning-oriented vision-to-tactile generation framework, which synthesizes contact-depth images without any physical tactile sensor. It employs an encoder-decoder architecture to model the vision-to-tactile mapping and incorporates contrastive learning to align visual and tactile features across modalities. Contribution/Results: The method enables zero-shot cross-device deployment, substantially alleviating hardware dependency and calibration burdens. Evaluated on both simulated and real-world robotic platforms, it achieves up to an 86% task success rate, significantly outperforming vision-only baselines, and marks the first demonstration of efficient visual-tactile fusion control without real tactile feedback.
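The summary describes an encoder-decoder network that maps visual frames to contact-depth images. Below is a minimal PyTorch sketch of what such a generator could look like; the 64x64 resolution, layer sizes, and module names are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of an encoder-decoder vision-to-touch generator.
# Assumes 64x64 RGB input and a 64x64 single-channel contact-depth output;
# all sizes are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class VisionToTouchGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the visual frame into a compact latent map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
        )
        # Decoder: upsample the latent map into a contact-depth image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),  # normalized contact depth in [0, 1]
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

# Usage: predict contact-depth images from a batch of visual frames.
gen = VisionToTouchGenerator()
depth = gen(torch.randn(8, 3, 64, 64))  # -> (8, 1, 64, 64)
```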
📝 Abstract
Robotic pushing is a fundamental manipulation task that requires tactile feedback to capture the subtle contact forces and dynamics between the end-effector and the object. However, real tactile sensors often face hardware limitations such as high cost and fragility, as well as deployment challenges involving calibration and cross-sensor variability, while vision-only policies struggle to achieve satisfactory performance. Inspired by humans' ability to infer tactile states from vision, we propose ViTacGen, a novel robot manipulation framework for visual robotic pushing with vision-to-touch generation in reinforcement learning, which eliminates the reliance on high-resolution real tactile sensors and enables effective zero-shot deployment on vision-only robotic systems. Specifically, ViTacGen consists of an encoder-decoder vision-to-touch generation network that generates contact depth images, a standardized tactile representation, directly from visual image sequences, followed by a reinforcement learning policy that fuses visual and generated tactile observations via contrastive learning. We validate the effectiveness of our approach in both simulation and real-world experiments, demonstrating superior performance and a success rate of up to 86%.
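To illustrate the contrastive visual-tactile alignment the abstract mentions, here is a minimal sketch of a symmetric InfoNCE-style loss between visual and generated tactile embeddings; the function name, temperature, and embedding dimensions are assumptions rather than the paper's exact formulation.

```python
# Hypothetical contrastive alignment loss for visual/tactile embeddings,
# in the spirit of symmetric InfoNCE; not the paper's exact objective.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(vis_feat, tac_feat, temperature=0.1):
    """Pull matching visual/tactile pairs together, push mismatched pairs apart.

    vis_feat, tac_feat: (batch, dim) embeddings of the same scenes from the
    visual encoder and the vision-to-touch generator, respectively.
    """
    vis = F.normalize(vis_feat, dim=-1)
    tac = F.normalize(tac_feat, dim=-1)
    logits = vis @ tac.t() / temperature     # (batch, batch) similarity matrix
    targets = torch.arange(vis.size(0))      # i-th vision matches i-th touch
    # Symmetric cross-entropy over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with random stand-in embeddings:
loss = contrastive_alignment_loss(torch.randn(8, 128), torch.randn(8, 128))
```

In this kind of setup, minimizing the loss encourages the policy's visual and generated tactile features to agree on which observations belong together, which is one plausible way to realize the multimodal fusion described above.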