🤖 AI Summary
This study addresses the challenge of enabling robots to perform cross-modal reasoning about object physical properties—such as hardness, coefficient of friction, and mass distribution—to improve manipulation safety and adaptability. We propose the first framework that embeds tactile signals into large-scale vision-language models (VLMs), introducing a hierarchical tactile–visual feature alignment mechanism and a fine-grained physical perception prompting strategy to achieve strong coupling between tactile semantics and the vision–language embedding space. Critically, our method requires no tactile annotations and supports zero-shot generalization. Experiments on 35 real-world objects demonstrate high correlation between predicted and ground-truth physical properties (average Pearson *r* > 0.89), significantly outperforming existing unimodal and multimodal baselines. Moreover, it improves performance by up to 23.6% on tasks involving grasp stability and interactive safety.
📝 Abstract
Inferring physical properties can significantly enhance robotic manipulation by enabling robots to handle objects safely and efficiently through adaptive grasping strategies. Previous approaches have typically relied on either tactile or visual data, limiting their ability to fully capture properties. We introduce a novel cross-modal perception framework that integrates visual observations with tactile representations within a multimodal vision-language model. Our physical reasoning framework, which employs a hierarchical feature alignment mechanism and a refined prompting strategy, enables our model to make property-specific predictions that strongly correlate with ground-truth measurements. Evaluated on 35 diverse objects, our approach outperforms existing baselines and demonstrates strong zero-shot generalization. Keywords: tactile perception, visual-tactile fusion, physical property inference, multimodal integration, robot perception