🤖 AI Summary
Multifinger dexterous hands struggle to achieve high-degree-of-freedom coordinated motion switching under blind conditions—i.e., without visual feedback—especially when responding to subtle tactile variations.
Method: This paper proposes a real-time blind grasp pose adjustment strategy integrating tactile constraints and regional attention mechanisms. We design a tactile state constraint loss and a dynamic modality attention gating mechanism, building an AE-LSTM hybrid architecture: an autoencoder compresses whole-hand tactile signals, while an LSTM models the temporal tactile–action mapping; additionally, region-wise tactile state encoding enables subtask-driven motion transitions.
Contribution/Results: To our knowledge, this is the first method enabling blind, dynamic motion-mode transfer based on whole-hand regional tactile perception. Evaluated on a physical robot platform across diverse bottle-cap opening tasks, it achieves state-of-the-art success rates. The model autonomously discriminates subtasks (e.g., sliding vs. unscrewing) and adaptively focuses on task-critical tactile sensor modalities.
📝 Abstract
To achieve a desired grasping posture (including object position and orientation), multi-finger motions need to be conducted according to the current touch state. Specifically, when subtle changes occur while correcting the object state, not only proprioception but also tactile information from the entire hand can be beneficial. However, switching motions with the high DOFs of multiple fingers and abundant tactile information remains challenging. In this study, we propose a loss function with touch-state constraints and an attention mechanism for focusing on important modalities depending on the touch states. The policy model is an AE-LSTM, consisting of an autoencoder (AE) that compresses the abundant tactile information and a Long Short-Term Memory (LSTM) network that switches motions depending on the touch states. Cap-opening was chosen as the target task, which consists of the subtasks of sliding an object and opening its cap. As a result, the proposed method achieved the best success rates across a variety of objects for real-time cap-opening manipulation. Furthermore, we confirmed that the proposed model acquired the features of each subtask and attended to specific modalities.
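The per-region compression and modality attention described above can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation: the class name `TactileAttentionGate`, the linear stand-in for the trained AE encoder, and the dot-product scoring are all assumptions made for clarity. It shows the shape of the idea only: each hand region's tactile signal is compressed to a latent vector, a softmax over region scores yields attention weights, and the gated features would then feed the LSTM policy.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TactileAttentionGate:
    """Hypothetical sketch of regional tactile compression + attention gating.
    Weights are random stand-ins for trained parameters."""

    def __init__(self, n_regions, raw_dim, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-in for trained AE encoder weights, one encoder per hand region.
        self.enc = rng.standard_normal((n_regions, latent_dim, raw_dim)) * 0.1
        # Stand-in for a learned attention scoring vector per region.
        self.score = rng.standard_normal((n_regions, latent_dim)) * 0.1

    def forward(self, tactile):
        """tactile: (n_regions, raw_dim) whole-hand tactile reading."""
        # Compress each region's raw signal into a latent vector.
        latent = np.einsum('rlk,rk->rl', self.enc, tactile)
        # Softmax attention over regions: which modality matters now?
        weights = softmax(np.einsum('rl,rl->r', self.score, latent))
        # Gate each region's latent features by its attention weight.
        gated = weights[:, None] * latent
        # Flattened gated features would be the LSTM policy input.
        return gated.reshape(-1), weights

gate = TactileAttentionGate(n_regions=5, raw_dim=64, latent_dim=8)
reading = np.random.default_rng(1).standard_normal((5, 64))
feat, w = gate.forward(reading)
```

In the paper's setting the attention weights would shift between regions as the task switches from sliding to unscrewing; here they are just a softmax over random projections, but the data flow (compress per region, score, gate, feed the recurrent policy) matches the architecture the abstract describes.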