🤖 AI Summary
Variations in design and fabrication across high-resolution optical tactile sensors cause inconsistent tactile signal distributions, severely hindering cross-sensor transferability of models and knowledge.
Method: We propose Sensor-Invariant Tactile Representations (SITR), the first framework to enable zero-shot cross-sensor representation transfer without fine-tuning on real data. Our approach employs a Transformer-based self-supervised learning architecture pretrained on diverse synthetic optical tactile datasets, explicitly disentangling sensor-specific artifacts from task-relevant semantic features.
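As a concrete illustration, below is a minimal, hypothetical PyTorch sketch of a disentangling encoder in the spirit described above: a ViT-style Transformer backbone whose pooled features are split into a sensor-specific branch and a semantic branch. The class name, layer sizes, and two-head design are our assumptions for illustration only; the paper's actual SITR architecture and training objectives are not reproduced here.

```python
# Hypothetical sketch of a disentangling tactile encoder (not the paper's
# actual SITR architecture; all names and dimensions are assumptions).
import torch
import torch.nn as nn


class DisentanglingTactileEncoder(nn.Module):
    """ViT-style encoder that splits pooled features into a sensor-specific
    branch and a semantic branch, so downstream tasks can use only the
    semantic part."""

    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        # Patchify the tactile image with a strided convolution.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)
        # Separate projections for sensor-specific vs. task-relevant features.
        self.sensor_head = nn.Linear(dim, dim)
        self.semantic_head = nn.Linear(dim, dim)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, D)
        tokens = self.backbone(tokens + self.pos_embed)
        pooled = tokens.mean(dim=1)                              # (B, D)
        return self.sensor_head(pooled), self.semantic_head(pooled)


# Example: a batch of tactile images from any optical sensor.
encoder = DisentanglingTactileEncoder()
sensor_feat, semantic_feat = encoder(torch.randn(2, 3, 224, 224))
print(sensor_feat.shape, semantic_feat.shape)  # torch.Size([2, 256]) each
```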
Contribution/Results: Evaluated on multiple real-world tactile tasks, including material classification and object pose estimation, SITR significantly improves model generalization and the reuse of existing tactile data. On unseen sensors it achieves an average accuracy improvement of 12.7% over baselines. By reducing sensor-specific calibration to a minimum and enabling near plug-and-play deployment, SITR lays the groundwork for standardized, scalable tactile perception.
📝 Abstract
High-resolution tactile sensors have become critical for embodied perception and robotic manipulation. However, a key challenge in the field is the lack of transferability between sensors due to design and manufacturing variations, which result in significant differences in tactile signals. This limitation hinders the ability to transfer models or knowledge learned from one sensor to another. To address this, we introduce a novel method for extracting Sensor-Invariant Tactile Representations (SITR), enabling zero-shot transfer across optical tactile sensors. Our approach utilizes a transformer-based architecture trained on a diverse dataset of simulated sensor designs, allowing it to generalize to new sensors in the real world with minimal calibration. Experimental results demonstrate the method's effectiveness across various tactile sensing applications, facilitating data and model transferability for future advancements in the field.
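To make the zero-shot transfer protocol concrete, the hedged sketch below reuses the hypothetical encoder from the earlier block: the pretrained encoder is frozen, a small task head is fit on data from one sensor, and the same head is then evaluated unchanged on images from an unseen sensor, using only the semantic features. All data, weights, and class counts are placeholders, not the paper's actual experimental setup.

```python
# Hypothetical zero-shot cross-sensor protocol, reusing the
# DisentanglingTactileEncoder sketch from above (an assumption,
# not the paper's actual evaluation pipeline).
import torch
import torch.nn as nn

encoder = DisentanglingTactileEncoder()   # stand-in for simulation-pretrained weights
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False               # freeze the representation


def semantic_features(images):
    """Keep only the semantic branch; discard sensor-specific features."""
    with torch.no_grad():
        _, sem = encoder(images)
    return sem


# Fit a linear task head on data from sensor A (e.g. 10 material classes).
probe = nn.Linear(256, 10)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
images_a = torch.randn(8, 3, 224, 224)    # placeholder batch from sensor A
labels_a = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(probe(semantic_features(images_a)), labels_a)
opt.zero_grad()
loss.backward()
opt.step()

# Evaluate the same head, unchanged, on images from unseen sensor B.
images_b = torch.randn(8, 3, 224, 224)    # placeholder batch from sensor B
preds_b = probe(semantic_features(images_b)).argmax(dim=1)
```

In this setup only the semantic branch crosses the sensor boundary, which is the property the abstract's sensor-invariance claim targets: nothing downstream of the frozen encoder ever sees sensor-specific signal.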