🤖 AI Summary
To address poor cross-sensor generalization caused by non-standardized tactile sensors and the limited intermediate interaction among tactile, language, and vision modalities, this paper proposes a unified multimodal representation framework built on the CLIP architecture. The method introduces: (1) a Sensor-Aware Modulator that aligns tactile features across devices; (2) a decoupled learning mechanism that isolates sensor-specific, task-irrelevant interference; (3) a Unified Bridging Adapter enabling fine-grained interaction among all three modalities in the shared latent space; and (4) an RSS evaluation framework quantifying Robustness, Synergy, and Stability. Extensive experiments show significant improvements on downstream tasks, including cross-sensor transfer, multimodal retrieval, and embodied reasoning, with stronger generalization and cross-modal synergy than prior approaches. A minimal architectural sketch follows.
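The sketch below illustrates, under assumptions, how a Sensor-Aware Modulator and a Unified Bridging Adapter might sit on top of frozen CLIP-style encoders. The FiLM-style sensor conditioning, the bottleneck attention adapter, and all dimensions are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): Sensor-Aware Modulator and
# Unified Bridging Adapter operating on CLIP-sized (512-d) features.
import torch
import torch.nn as nn


class SensorAwareModulator(nn.Module):
    """Conditions tactile features on a learned sensor-ID embedding (FiLM-style; assumed design)."""

    def __init__(self, dim: int = 512, num_sensors: int = 4):
        super().__init__()
        self.sensor_embed = nn.Embedding(num_sensors, dim)
        self.to_scale_shift = nn.Linear(dim, 2 * dim)

    def forward(self, tactile_feat: torch.Tensor, sensor_id: torch.Tensor) -> torch.Tensor:
        scale, shift = self.to_scale_shift(self.sensor_embed(sensor_id)).chunk(2, dim=-1)
        return tactile_feat * (1 + scale) + shift


class UnifiedBridgingAdapter(nn.Module):
    """Small bottleneck adapter that lets tactile, vision, and text features interact."""

    def __init__(self, dim: int = 512, bottleneck: int = 128, heads: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.mix = nn.MultiheadAttention(bottleneck, heads, batch_first=True)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, tactile, vision, text):
        tokens = torch.stack([tactile, vision, text], dim=1)  # (B, 3, dim)
        z = self.down(tokens)
        z, _ = self.mix(z, z, z)                               # tri-modal interaction
        tokens = tokens + self.up(z)                           # residual update per modality
        return tokens.unbind(dim=1)


# Usage with dummy features standing in for frozen CLIP encoder outputs.
B, D = 8, 512
tactile, vision, text = (torch.randn(B, D) for _ in range(3))
sensor_id = torch.randint(0, 4, (B,))

tactile = SensorAwareModulator(D)(tactile, sensor_id)
tactile, vision, text = UnifiedBridgingAdapter(D)(tactile, vision, text)
```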
📝 Abstract
Tactile sensing offers rich information complementary to vision and language, enabling robots to perceive fine-grained object properties. However, existing tactile sensors lack standardization, leading to redundant features that hinder cross-sensor generalization. Moreover, existing methods fail to fully exploit intermediate interaction among the tactile, language, and vision modalities. To address these issues, we propose TLV-CoRe, a CLIP-based Tactile-Language-Vision Collaborative Representation learning method. TLV-CoRe introduces a Sensor-Aware Modulator to unify tactile features across different sensors and employs tactile-irrelevant decoupled learning to disentangle task-irrelevant tactile features. Additionally, a Unified Bridging Adapter is introduced to enhance tri-modal interaction within the shared representation space. To fairly evaluate the effectiveness of tactile models, we further propose the RSS evaluation framework, which assesses the Robustness, Synergy, and Stability of different methods. Experimental results demonstrate that TLV-CoRe significantly improves sensor-agnostic representation learning and cross-modal alignment, offering a new direction for multimodal tactile representation.
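The abstract does not spell out how Robustness, Synergy, and Stability are computed; the snippet below is one plausible reading, with illustrative formulas (worst-case accuracy on unseen sensors, tri-modal gain over the best bi-modal baseline, run-to-run variance across seeds) that are assumptions rather than TLV-CoRe's actual metrics.

```python
# Toy RSS-style report; metric definitions are illustrative assumptions.
from statistics import mean, pstdev


def rss_report(acc_by_sensor: dict, trimodal_acc: float,
               best_bimodal_acc: float, acc_by_seed: list) -> dict:
    """Aggregate an example Robustness / Synergy / Stability summary."""
    robustness = min(acc_by_sensor.values())       # worst case across unseen sensors
    synergy = trimodal_acc - best_bimodal_acc      # gain from adding the third modality
    stability = pstdev(acc_by_seed)                # run-to-run spread (lower is better)
    return {"robustness": robustness, "synergy": synergy, "stability": stability,
            "mean_sensor_acc": mean(acc_by_sensor.values())}


print(rss_report({"gelsight": 0.71, "digit": 0.64}, trimodal_acc=0.78,
                 best_bimodal_acc=0.70, acc_by_seed=[0.77, 0.78, 0.79]))
```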