TaSA: Two-Phased Deep Predictive Learning of Tactile Sensory Attenuation for Improving In-Grasp Manipulation

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a central challenge in robotic dexterous manipulation: tactile signals from self-contact are difficult to distinguish from those caused by contact with external objects, often leading to task failure or damage. To resolve this, the authors propose a two-phased deep predictive learning framework: the first phase models the tactile dynamics induced by the robot's own movements to capture self-contact patterns, and the second integrates this learned model into the motion-learning phase to enhance sensitivity to external contacts. Inspired by human sensory attenuation, this study is the first to incorporate this biological principle into robotic tactile perception through structured modeling of self-contact, enabling effective discrimination of external interactions. Evaluated on fine manipulation tasks, including pencil-lead insertion, coin insertion, and paper-clip fixing, the method significantly outperforms existing baselines, demonstrating both efficacy and robustness.

📝 Abstract
Humans can achieve diverse in-hand manipulations, such as object pinching and tool use, which often involve simultaneous contact between the object and multiple fingers. This remains an open issue for robotic hands, because such dexterous manipulation requires distinguishing tactile sensations generated by self-contact from those arising from external contact; otherwise, unexpected contacts and collisions can damage the object or the robot. Indeed, most approaches sidestep self-contact altogether, constraining motion to avoid it or ignoring tactile information during contact. While this reduces complexity, it also limits generalization to real-world scenarios where self-contact is inevitable. Humans overcome this challenge through self-touch perception, using predictive mechanisms that anticipate the tactile consequences of their own motion. This principle, called sensory attenuation, lets the nervous system suppress predictable self-touch signals so that novel object stimuli stand out as relevant. Drawing on this principle, we introduce TaSA, a two-phased deep predictive learning framework. In the first phase, TaSA explicitly learns self-touch dynamics, modeling how a robot's own actions generate tactile feedback. In the second phase, this learned model is incorporated into the motion-learning phase to emphasize object-contact signals during manipulation. We evaluate TaSA on a set of insertion tasks that demand fine tactile discrimination: inserting a pencil lead into a mechanical pencil, inserting coins into a slot, and fixing a paper clip onto a sheet of paper, with varying orientations, positions, and sizes. Across all tasks, policies trained with TaSA achieve significantly higher success rates than baseline methods, demonstrating that structured tactile perception of self-touch based on sensory attenuation is critical for dexterous robotic manipulation.
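The core mechanism the abstract describes can be illustrated with a minimal sketch: a forward model predicts the tactile signal the robot's own action should produce (phase one), and subtracting that prediction from the observed reading attenuates self-touch so external contact stands out (phase two). The names (`predict_self_touch`, `attenuate`) and the toy linear forward model below are hypothetical stand-ins for illustration only, not the paper's actual networks.

```python
# Illustrative sketch of sensory attenuation, assuming a pre-trained
# forward model; this is NOT the paper's implementation.

def predict_self_touch(action, weights):
    """Phase-1 stand-in: forward model mapping a motor action to the
    tactile reading the robot's own movement is expected to produce."""
    return [w * action for w in weights]

def attenuate(observed, predicted):
    """Phase-2 step: subtract predicted self-touch so that signals from
    external object contact dominate the residual."""
    return [o - p for o, p in zip(observed, predicted)]

# Hypothetical per-taxel weights of the (already trained) forward model.
weights = [0.5, 0.2, 0.0]

action = 2.0
self_touch = predict_self_touch(action, weights)
external = [0.0, 0.0, 0.3]          # an object pressing only on taxel 2
observed = [s + e for s, e in zip(self_touch, external)]

residual = attenuate(observed, self_touch)
# Self-touch components cancel; the external contact at taxel 2 remains.
print(residual)
```

In the full system a learned policy would consume `residual` rather than the raw tactile reading, which is how attenuation sharpens sensitivity to object contact during manipulation.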
Problem

Research questions and friction points this paper is trying to address.

tactile sensory attenuation
in-grasp manipulation
self-contact
dexterous robotic manipulation
tactile perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

tactile sensory attenuation
self-touch perception
predictive learning
in-grasp manipulation
dexterous robotic hands