Collaborative Representation Learning for Alignment of Tactile, Language, and Vision Modalities

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak cross-device generalization caused by non-standardized tactile sensors and insufficient intermediate-layer coordination among tactile, language, and vision modalities, this paper proposes TLV-CoRe, a unified multimodal representation framework built on the CLIP architecture. The method introduces: (1) a Sensor-Aware Modulator for cross-device tactile feature alignment; (2) a decoupled learning mechanism that isolates sensor-specific, task-irrelevant interference; (3) a Unified Bridging Adapter enabling fine-grained latent-space interaction across all three modalities; and (4) an RSS evaluation framework quantifying Robustness, Synergy, and Stability. Extensive experiments show significant improvements on downstream tasks, including cross-sensor transfer, multimodal retrieval, and embodied reasoning, with superior generalization and cross-modal synergy over prior approaches.

📝 Abstract
Tactile sensing offers rich and complementary information to vision and language, enabling robots to perceive fine-grained object properties. However, existing tactile sensors lack standardization, leading to redundant features that hinder cross-sensor generalization. Moreover, existing methods fail to fully integrate the intermediate communication among tactile, language, and vision modalities. To address this, we propose TLV-CoRe, a CLIP-based Tactile-Language-Vision Collaborative Representation learning method. TLV-CoRe introduces a Sensor-Aware Modulator to unify tactile features across different sensors and employs tactile-irrelevant decoupled learning to disentangle irrelevant tactile features. Additionally, a Unified Bridging Adapter is introduced to enhance tri-modal interaction within the shared representation space. To fairly evaluate the effectiveness of tactile models, we further propose the RSS evaluation framework, focusing on Robustness, Synergy, and Stability across different methods. Experimental results demonstrate that TLV-CoRe significantly improves sensor-agnostic representation learning and cross-modal alignment, offering a new direction for multimodal tactile representation.
Problem

Research questions and friction points this paper is trying to address.

Standardizing tactile sensor features to enable cross-sensor generalization
Integrating intermediate communication among tactile, language, and vision modalities
Developing a robust evaluation framework for multimodal tactile representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sensor-Aware Modulator unifies tactile features across sensors
Tactile-irrelevant decoupled learning disentangles irrelevant tactile features
Unified Bridging Adapter enhances tri-modal interaction in shared space
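The Sensor-Aware Modulator idea above can be illustrated with a minimal sketch: each sensor type gets its own learned scale-and-shift (FiLM-style) transform that maps device-specific tactile features into one shared embedding space, where they can then be compared against text or vision embeddings. This is a hypothetical toy illustration under assumed names and dimensions, not the paper's actual implementation; the sensor IDs, `EMB_DIM`, and the FiLM parameterization are all assumptions.

```python
import math
import random

# Toy sketch of a sensor-aware modulator (FiLM-style, assumed design):
# one (gamma, beta) pair per sensor type maps raw tactile features from
# heterogeneous devices into a shared space. In a real system these
# parameters would be learned; here they are random for illustration.

EMB_DIM = 8  # toy size; CLIP embeddings are typically 512-d or larger
random.seed(0)

class SensorAwareModulator:
    def __init__(self, sensor_ids):
        # per-sensor scale (gamma) and shift (beta) vectors
        self.params = {
            s: ([random.gauss(1.0, 0.1) for _ in range(EMB_DIM)],
                [random.gauss(0.0, 0.1) for _ in range(EMB_DIM)])
            for s in sensor_ids
        }

    def __call__(self, feat, sensor_id):
        # element-wise modulation: gamma * feat + beta
        gamma, beta = self.params[sensor_id]
        return [g * f + b for g, f, b in zip(gamma, feat, beta)]

def cosine(a, b):
    # cosine similarity, the usual alignment score in a CLIP-style space
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

mod = SensorAwareModulator(["gelsight", "digit"])
tactile = [random.gauss(0.0, 1.0) for _ in range(EMB_DIM)]   # raw tactile feature
unified = mod(tactile, "gelsight")                           # sensor-conditioned embedding
text_emb = [random.gauss(0.0, 1.0) for _ in range(EMB_DIM)]  # stand-in text embedding
score = cosine(unified, text_emb)                            # tri-modal alignment score
```

The same raw feature passed through a different sensor's (gamma, beta) pair lands at a different point in the shared space, which is exactly the device-specific variation the decoupled-learning objective would then be trained to suppress.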
Authors

Yiyun Zhou — Zhejiang University (Data Mining, Multimodal Learning, Large Language Model)
Mingjing Xu — Swansea University
Jingwei Shi — Shanghai University of Finance and Economics (Deep Learning, LLM, MLLM, Agent)
Quanjiang Li — National University of Defense Technology
Jingyuan Chen — Zhejiang University