ViTacGen: Robotic Pushing with Vision-to-Touch Generation

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tactile feedback is essential for robotic physical interaction, yet real tactile sensors suffer from high cost, fragility, calibration complexity, and inter-device variability, while purely vision-based methods exhibit limited performance. Method: We propose the first reinforcement learning–oriented vision-to-tactile generation framework that synthesizes contact-depth images without physical tactile sensors. It employs an encoder-decoder architecture to model the vision-to-tactile mapping and incorporates contrastive learning to align visual and tactile features across modalities. Contribution/Results: The method enables zero-shot cross-device deployment, substantially alleviating hardware dependency and calibration challenges. Evaluated on both simulated and real-world robotic platforms, it achieves up to an 86% task success rate, significantly outperforming vision-only baselines, and marks the first demonstration of efficient vision-tactile fusion control without real tactile feedback.

📝 Abstract
Robotic pushing is a fundamental manipulation task that requires tactile feedback to capture the subtle contact forces and dynamics between the end-effector and the object. However, real tactile sensors often face hardware limitations such as high cost and fragility, as well as deployment challenges involving calibration and variation between different sensors, while vision-only policies struggle to achieve satisfactory performance. Inspired by humans' ability to infer tactile states from vision, we propose ViTacGen, a novel robot manipulation framework designed for visual robotic pushing with vision-to-touch generation in reinforcement learning, eliminating the reliance on high-resolution real tactile sensors and enabling effective zero-shot deployment on vision-only robotic systems. Specifically, ViTacGen consists of an encoder-decoder vision-to-touch generation network that generates contact depth images, a standardized tactile representation, directly from visual image sequences, followed by a reinforcement learning policy that fuses visual-tactile data with contrastive learning based on visual and generated tactile observations. We validate the effectiveness of our approach in both simulation and real-world experiments, demonstrating its superior performance and achieving a success rate of up to 86%.
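The abstract states that the policy aligns visual and generated tactile features with contrastive learning. The paper does not spell out the exact objective, but a standard choice for aligning paired embeddings from two modalities is a symmetric InfoNCE loss, sketched below in plain NumPy (the function name and temperature value are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def info_nce_loss(vis, tac, temperature=0.1):
    """Symmetric InfoNCE over a batch of paired visual/tactile embeddings.

    vis, tac: (N, D) arrays; row i of vis is paired with row i of tac.
    Matching pairs are pulled together, all other pairs pushed apart.
    (Illustrative sketch; not the paper's actual objective.)
    """
    # L2-normalize so the dot product is cosine similarity
    vis = vis / np.linalg.norm(vis, axis=1, keepdims=True)
    tac = tac / np.linalg.norm(tac, axis=1, keepdims=True)
    logits = vis @ tac.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(vis))                # positives lie on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the vision-to-touch and touch-to-vision directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss drives each visual embedding toward its paired tactile embedding and away from the other samples in the batch, which is the alignment behavior the abstract describes.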
Problem

Research questions and friction points this paper is trying to address.

Eliminating reliance on real tactile sensors through vision-to-touch generation
Overcoming hardware limitations and deployment challenges of tactile sensors
Improving robotic pushing performance without physical tactile feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates contact depth images from vision
Uses reinforcement learning with fused visual-tactile data
Enables zero-shot deployment on vision-only systems
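The dataflow implied by the bullets above (visual frame sequence in, contact depth image out, fed to a policy) can be sketched as a toy pipeline. This is purely an interface illustration with hand-rolled pooling/upsampling in place of the paper's learned encoder-decoder network; all function names are hypothetical:

```python
import numpy as np

def encode(frames):
    """Toy 'encoder': average over time, then 4x4 mean-pool to a latent map."""
    t, h, w = frames.shape
    mean = frames.mean(axis=0)
    return mean.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

def decode(latent, out_hw):
    """Toy 'decoder': nearest-neighbour upsample back to contact-depth size."""
    h, w = out_hw
    lat_h, lat_w = latent.shape
    return np.repeat(np.repeat(latent, h // lat_h, axis=0), w // lat_w, axis=1)

def vision_to_touch(frames):
    """Map a visual frame sequence (T, H, W) to one contact depth image (H, W),
    mirroring the generation step a vision-only robot would run at deployment."""
    latent = encode(frames)
    return decode(latent, frames.shape[1:])
```

At deployment the generated contact depth image would be stacked with the current visual observation as policy input, which is what lets the system run zero-shot on hardware that has no tactile sensor at all.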
Zhiyuan Wu
Department of Engineering, King’s College London, Strand, London, WC2R 2LS, United Kingdom
Yijiong Lin
University of Bristol
Robotic Manipulation, Physics Simulation, Sim-to-real RL Policy
Yongqiang Zhao
Department of Engineering, King’s College London, Strand, London, WC2R 2LS, United Kingdom
Xuyang Zhang
King's College London
Robotics, Tactile Sensing, Robot Manipulation
Zhuo Chen
Department of Engineering, King’s College London, Strand, London, WC2R 2LS, United Kingdom
Nathan Lepora
Department of Engineering Mathematics and Bristol Robotics Laboratory, University of Bristol, Bristol, BS8 1UB, United Kingdom
Shan Luo
Reader (Associate Professor), King's College London
Robotics, Robot Perception, Tactile Sensing, Computer Vision, Machine Learning