AI Summary
This work addresses the challenges of insufficient tactile perception and difficult sim-to-real transfer in in-hand object translation with dexterous hands. Methodologically, we propose a three-axis tactile-driven control framework that enables zero-shot sim-to-real transfer: (i) we develop the first physics-consistent tactile skin model capable of simulating three-axis shear and normal forces; and (ii) we design a deep reinforcement learning policy, based on Proximal Policy Optimization (PPO), that fuses multi-dimensional tactile and proprioceptive sensing and is augmented with sliding-contact modeling to improve robustness during dynamic interactions. Contributions include: (i) zero-shot sim-to-real transfer without real-world fine-tuning; (ii) experimental validation of stable in-hand translation on a physical dexterous hand across unseen objects and multiple robot hand orientations; and (iii) a demonstration that the full three-axis tactile policy significantly outperforms unimodal baselines (shear-only, normal-only, or proprioception-only), establishing a generalizable, deployable paradigm for tactile dexterous manipulation.
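As a rough illustration of the fusion step described above, the sketch below concatenates per-taxel tactile readings with proprioceptive joint state into the flat observation vector a PPO policy network would typically consume. All dimensions and names here are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical dimensions for illustration only (not from the paper).
N_TAXELS = 16   # tactile sensing elements on the skin
N_JOINTS = 16   # dexterous-hand joints

def build_observation(shear_xy, normal_z, joint_pos, joint_vel):
    """Fuse multi-dimensional tactile and proprioceptive signals
    into a single flat observation vector for an RL policy."""
    return np.concatenate([
        shear_xy.ravel(),   # per-taxel 2-axis shear readings
        normal_z.ravel(),   # per-taxel normal readings
        joint_pos.ravel(),  # proprioception: joint positions
        joint_vel.ravel(),  # proprioception: joint velocities
    ])

obs = build_observation(
    np.zeros((N_TAXELS, 2)), np.zeros(N_TAXELS),
    np.zeros(N_JOINTS), np.zeros(N_JOINTS),
)
# With the placeholder dimensions above, obs has 16*3 + 16*2 = 80 entries.
```

In practice each modality could instead be passed through its own encoder before fusion; simple concatenation is the minimal version of the idea.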
Abstract
Recent progress in reinforcement learning (RL) and tactile sensing has significantly advanced dexterous manipulation. However, these methods often utilize simplified tactile signals due to the gap between tactile simulation and the real world. We introduce a sensor model for tactile skin that enables zero-shot sim-to-real transfer of ternary shear and binary normal forces. Using this model, we develop an RL policy that leverages sliding contact for dexterous in-hand translation. We conduct extensive real-world experiments to assess how tactile sensing facilitates policy adaptation to various unseen object properties and robot hand orientations. We demonstrate that our 3-axis tactile policies consistently outperform baselines that use only shear forces, only normal forces, or only proprioception. Website: https://jessicayin.github.io/tactile-skin-rl/
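The abstract's "ternary shear and binary normal forces" suggests a discretization of continuous simulated contact forces into the coarse signals a real tactile skin can reliably report. The sketch below shows one plausible form of that mapping; the thresholds and units are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

# Hypothetical thresholds for illustration only (not from the paper).
SHEAR_THRESH = 0.05   # shear magnitudes below this read as 0
NORMAL_THRESH = 0.1   # contact-detection threshold for normal force

def discretize(shear, normal):
    """Map continuous forces to coarse sensor signals:
    ternary shear per axis (-1, 0, +1) and binary normal (0 or 1)."""
    shear_t = np.where(np.abs(shear) < SHEAR_THRESH,
                       0, np.sign(shear)).astype(int)
    normal_b = (normal > NORMAL_THRESH).astype(int)
    return shear_t, normal_b

s, n = discretize(np.array([0.2, -0.01]), np.array([0.3, 0.02]))
# s -> [1, 0]; n -> [1, 0]
```

Training the policy on these quantized signals in simulation is one way such a sensor model could close the sim-to-real gap: the simulated observations match what the physical skin actually outputs.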