🤖 AI Summary
Existing vision–tactile fusion approaches often rely on simplistic concatenation strategies, which struggle in occluded scenarios and fail to exploit the complementary nature and alignment potential between modalities. To address these limitations, this work proposes ViTaS, a novel framework that explicitly models the alignment and complementary characteristics of visual and tactile signals through soft-fusion contrastive learning and a conditional variational autoencoder (CVAE). By moving beyond conventional concatenation-based fusion, ViTaS achieves more robust and semantically coherent multimodal representations. Extensive evaluation across 12 simulated environments and 3 real-world manipulation tasks demonstrates that ViTaS consistently outperforms current baselines, exhibiting superior robustness and generalization capabilities under challenging conditions.
📝 Abstract
Tactile information plays a crucial role in human manipulation and has recently garnered increasing attention in robotic manipulation. However, existing approaches mostly focus on aligning visual and tactile features, and their integration mechanism tends to be direct concatenation. Consequently, they struggle to cope with occluded scenarios: the inherent complementarity of the two modalities is neglected, and their alignment is not fully exploited, limiting the potential for real-world deployment. In this paper, we present ViTaS, a simple yet effective framework that incorporates both visual and tactile information to guide an agent's behavior. We introduce Soft Fusion Contrastive Learning, an advanced variant of conventional contrastive learning, together with a CVAE module, to exploit both the alignment and the complementarity of visuo-tactile representations. We demonstrate the effectiveness of our method in 12 simulated and 3 real-world environments, and our experiments show that ViTaS significantly outperforms existing baselines. Project page: https://skyrainwind.github.io/ViTaS/index.html.
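To make the contrastive-alignment idea concrete, the sketch below shows a generic *softened* InfoNCE-style loss between a batch of visual and tactile embeddings. This is a minimal illustration only: the abstract does not specify ViTaS's actual loss, so the soft-target scheme (placing `alpha` probability mass on the matching pair and spreading the remainder over non-matching pairs, rather than using hard one-hot labels) and all function names here are assumptions, not the paper's method.

```python
import numpy as np

def soft_contrastive_loss(vis, tac, tau=0.1, alpha=0.9):
    """Hypothetical softened InfoNCE loss aligning visual and tactile
    embeddings (an illustrative sketch, not the ViTaS objective).

    vis, tac : (N, D) batches of paired embeddings (row i of `vis`
               corresponds to row i of `tac`).
    tau      : temperature for the similarity logits.
    alpha    : target mass on the matching pair; the remaining
               (1 - alpha) is spread uniformly over non-matching pairs,
               softening the usual one-hot contrastive targets.
    """
    # L2-normalize so the dot product is cosine similarity.
    vis = vis / np.linalg.norm(vis, axis=1, keepdims=True)
    tac = tac / np.linalg.norm(tac, axis=1, keepdims=True)

    logits = vis @ tac.T / tau                      # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    n = len(vis)
    targets = np.full((n, n), (1.0 - alpha) / (n - 1))
    np.fill_diagonal(targets, alpha)                # soft, not one-hot

    # Cross-entropy between soft targets and the predicted distribution.
    return float(-(targets * log_p).sum(axis=1).mean())

# Usage: perfectly paired embeddings should score a lower loss than
# mismatched ones.
rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))
print(soft_contrastive_loss(v, v))        # aligned pairs
print(soft_contrastive_loss(v, v[::-1]))  # shuffled (misaligned) pairs
```

The soft targets are one way to express the intuition that non-matching vision–tactile pairs are not all equally "negative"; how ViTaS actually realizes its soft fusion is described in the paper itself.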