Cross-Sensor Touch Generation

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tactile representations lack cross-sensor generalizability due to strong dependence of existing models on specific hardware designs, hindering adaptation to diverse vision–tactile sensors. This paper proposes a unified tactile image generation framework enabling tactile signal translation and model transfer across arbitrary sensors. Our key contributions are: (1) Touch2Touch, an end-to-end paired learning method for supervised tactile domain translation; and (2) T2D2, an unsupervised intermediate deep representation approach that achieves cross-domain reconstruction via a shared latent space—eliminating the need for synchronized paired data. These methods respectively address scenarios with and without co-registered tactile recordings. We validate both approaches on hand pose estimation and behavior cloning tasks: transferred models achieve performance nearly matching that of source-sensor-trained baselines, demonstrating substantial improvements in generalization and deployment flexibility across heterogeneous tactile sensing platforms.

📝 Abstract
Today's visuo-tactile sensors come in many shapes and sizes, making it challenging to develop general-purpose tactile representations. This is because most models are tied to a specific sensor design. To address this challenge, we propose two approaches to cross-sensor image generation. The first is an end-to-end method that leverages paired data (Touch2Touch). The second method builds an intermediate depth representation and does not require paired data (T2D2: Touch-to-Depth-to-Touch). Both methods enable the use of sensor-specific models across multiple sensors via the cross-sensor touch generation process. Together, these models offer flexible solutions for sensor translation, depending on data availability and application needs. We demonstrate their effectiveness on downstream tasks such as in-hand pose estimation and behavior cloning, successfully transferring models trained on one sensor to another. Project page: https://samantabelen.github.io/cross_sensor_touch_generation.
Problem

Research questions and friction points this paper is trying to address.

Developing general tactile representations across diverse sensor designs
Enabling sensor-specific models to work on multiple tactile sensors
Transferring trained models between different visuo-tactile sensors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-sensor image generation for tactile sensors
Touch2Touch: end-to-end translation trained on paired data
T2D2 (Touch-to-Depth-to-Touch): intermediate depth representation, no paired data required
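The two transfer pathways can be sketched as function composition: either translate sensor-A images directly into sensor-B images (Touch2Touch), or route them through a sensor-agnostic depth map (T2D2), so that a downstream model trained only on sensor B can consume sensor-A data. The sketch below is purely illustrative; all function names and the dictionary-based "image" stand-ins are assumptions, not the paper's actual code, which would use trained image-to-image networks.

```python
# Illustrative sketch of the two cross-sensor transfer pathways.
# All names are hypothetical placeholders for trained networks.

def touch2touch(img_a):
    """Paired pathway: stand-in for an end-to-end A -> B translation model."""
    return {"domain": "B", "data": img_a["data"]}

def touch_to_depth(img_a):
    """Encode a sensor-A tactile image into a sensor-agnostic depth map."""
    return {"domain": "depth", "data": img_a["data"]}

def depth_to_touch(depth_map):
    """Decode the depth map into a sensor-B tactile image."""
    return {"domain": "B", "data": depth_map["data"]}

def t2d2(img_a):
    """Unpaired pathway: Touch-to-Depth-to-Touch composition."""
    return depth_to_touch(touch_to_depth(img_a))

def sensor_b_model(img_b):
    """Downstream model trained only on sensor B (e.g. pose estimation)."""
    assert img_b["domain"] == "B", "model expects sensor-B images"
    return f"pose_from({img_b['data']})"

# Either pathway lets the sensor-B model run on sensor-A input:
img_a = {"domain": "A", "data": "press"}
out_paired = sensor_b_model(touch2touch(img_a))
out_unpaired = sensor_b_model(t2d2(img_a))
```

The design point is that the downstream model never changes; only a translation stage is prepended, chosen by whether co-registered paired recordings are available.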