🤖 AI Summary
This study addresses the problem of tactile-driven visual material region localization: using tactile signals to identify image regions that share the same material properties as a given tactile input. To this end, the authors propose a local cross-modal alignment mechanism that enables fine-grained alignment through dense visual-tactile feature interactions and generates tactile saliency maps for tactile-conditioned material segmentation. Key contributions include the construction of the first tactile-grounded material segmentation datasets, the design of a material-diversity pairing strategy, and state-of-the-art performance on both established and newly introduced benchmarks, substantially outperforming existing visuo-tactile methods in tactile localization accuracy.
📝 Abstract
We address the problem of tactile localization, where the goal is to identify image regions that share the same material properties as a tactile input. Existing visuo-tactile methods rely on global alignment and thus fail to capture the fine-grained local correspondences required for this task. The challenge is amplified by existing datasets, which predominantly contain close-up, low-diversity images. We propose a model that learns local visuo-tactile alignment via dense cross-modal feature interactions, producing tactile saliency maps for touch-conditioned material segmentation. To overcome dataset constraints, we introduce: (i) in-the-wild multi-material scene images that expand visual diversity, and (ii) a material-diversity pairing strategy that aligns each tactile sample with visually varied yet tactilely consistent images, improving contextual localization and robustness to weak signals. We also construct two new tactile-grounded material segmentation datasets for quantitative evaluation. Experiments on both new and existing benchmarks show that our approach substantially outperforms prior visuo-tactile methods in tactile localization.
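For intuition, below is a minimal sketch of how a tactile-conditioned saliency map could be computed from dense cross-modal similarities between a tactile embedding and per-location visual features. The function names, tensor shapes, cosine normalization, and thresholding step are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn.functional as F

def tactile_saliency_map(visual_feats: torch.Tensor, tactile_feat: torch.Tensor) -> torch.Tensor:
    """Hypothetical local visuo-tactile alignment sketch.

    visual_feats: (B, C, H, W) dense per-location visual features
    tactile_feat: (B, C) tactile embedding
    Returns a (B, H, W) saliency map scoring how well each spatial
    location matches the material of the tactile input.
    """
    # L2-normalize both modalities so the dot product is a cosine similarity.
    v = F.normalize(visual_feats, dim=1)   # (B, C, H, W)
    t = F.normalize(tactile_feat, dim=1)   # (B, C)
    # Dense cross-modal interaction: similarity between the tactile embedding
    # and every spatial visual feature.
    return torch.einsum("bchw,bc->bhw", v, t)

def tactile_conditioned_mask(visual_feats: torch.Tensor,
                             tactile_feat: torch.Tensor,
                             threshold: float = 0.5) -> torch.Tensor:
    """Binarize the saliency map into a touch-conditioned material mask."""
    sal = tactile_saliency_map(visual_feats, tactile_feat)
    # Rescale cosine similarity from [-1, 1] to [0, 1] before thresholding.
    return (0.5 * (sal + 1.0)) > threshold
```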