🤖 AI Summary
This work addresses the limited generalization of existing CNN-based tactile perception methods on novel visuo-tactile sensors, which typically require extensive sensor-specific data and retraining, hindering rapid deployment on multi-fingered dexterous hands. To overcome this, the paper introduces TacViT, a tactile perception framework that adapts Vision Transformers to tactile images, leveraging global self-attention to extract robust features and accurately infer contact properties on unseen sensors. Experiments on a five-fingered dexterous hand show that TacViT generalizes substantially better than conventional CNN approaches, reducing the need for sensor-specific training data and retraining while making tactile sensing systems more scalable and practical.
📝 Abstract
Rapid deployment of new tactile sensors is essential for scalable robotic manipulation, especially in multi-fingered hands equipped with vision-based tactile sensors. However, current methods for inferring contact properties rely heavily on convolutional neural networks (CNNs), which, while effective on known sensors, depend on large, sensor-specific datasets and must be retrained for each new sensor owing to differences in lens properties, illumination, and sensor wear. Here we introduce TacViT, a novel tactile perception model based on Vision Transformers, designed to generalize to new sensor data. TacViT leverages global self-attention mechanisms to extract robust features from tactile images, enabling accurate contact property inference even on previously unseen sensors. This capability significantly reduces the need for data collection and retraining, accelerating the deployment of new sensors. We evaluate TacViT on sensors for a five-fingered robot hand and demonstrate its superior generalization performance compared to CNNs. Our results highlight TacViT's potential to make tactile sensing more scalable and practical for real-world robotic applications.
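To make the architectural idea concrete, below is a minimal sketch (not the authors' code) of a ViT-style tactile encoder in PyTorch: a tactile image is split into patches, embedded, processed with global self-attention, and pooled into a feature that regresses contact properties. The class name `TactileViT`, the patch size, the encoder depth, and the 3-dimensional contact-force head are all illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class TactileViT(nn.Module):
    """Hypothetical ViT-style encoder for vision-based tactile images."""
    def __init__(self, img_size=224, patch_size=16, dim=256,
                 depth=6, heads=8, out_dim=3):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Patch embedding via a strided convolution (one token per patch).
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Head regressing contact properties (here: a 3-D contact force, assumed).
        self.head = nn.Linear(dim, out_dim)

    def forward(self, x):                                   # x: (B, 3, H, W)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        feats = self.encoder(tokens)            # global self-attention over patches
        return self.head(feats[:, 0])           # predict from the CLS token

model = TactileViT()
pred = model(torch.randn(2, 3, 224, 224))       # -> (2, 3) contact estimates
```

Because every patch attends to every other patch, the learned features capture sensor-wide context rather than purely local texture, which is the property the abstract credits for cross-sensor generalization.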