Robotic Perception with a Large Tactile-Vision-Language Model for Physical Property Inference

📅 2025-06-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of enabling robots to perform cross-modal reasoning about object physical properties—such as hardness, coefficient of friction, and mass distribution—to improve manipulation safety and adaptability. We propose the first framework that embeds tactile signals into large-scale vision-language models (VLMs), introducing a hierarchical tactile–visual feature alignment mechanism and a fine-grained physical perception prompting strategy to achieve strong coupling between tactile semantics and the vision–language embedding space. Critically, our method requires no tactile annotations and supports zero-shot generalization. Experiments on 35 real-world objects demonstrate high correlation between predicted and ground-truth physical properties (average Pearson *r* > 0.89), significantly outperforming existing unimodal and multimodal baselines. Moreover, it improves performance by up to 23.6% on tasks involving grasp stability and interactive safety.

Technology Category

Application Category

📝 Abstract
Inferring physical properties can significantly enhance robotic manipulation by enabling robots to handle objects safely and efficiently through adaptive grasping strategies. Previous approaches have typically relied on either tactile or visual data, limiting their ability to fully capture properties. We introduce a novel cross-modal perception framework that integrates visual observations with tactile representations within a multimodal vision-language model. Our physical reasoning framework, which employs a hierarchical feature alignment mechanism and a refined prompting strategy, enables our model to make property-specific predictions that strongly correlate with ground-truth measurements. Evaluated on 35 diverse objects, our approach outperforms existing baselines and demonstrates strong zero-shot generalization. Keywords: tactile perception, visual-tactile fusion, physical property inference, multimodal integration, robot perception
Problem

Research questions and friction points this paper is trying to address.

Infer physical properties for safer robotic manipulation
Integrate visual and tactile data for better perception
Improve zero-shot generalization in property prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates visual and tactile data in multimodal model
Uses hierarchical feature alignment for accurate predictions
Demonstrates strong zero-shot generalization on objects
🔎 Similar Papers
No similar papers found.
Z
Zexiang Guo
College of Big Data and Internet, Shenzhen Technology University, China
H
Hengxiang Chen
College of Big Data and Internet, Shenzhen Technology University, China
X
Xinheng Mai
College of Big Data and Internet, Shenzhen Technology University, China
Q
Qiusang Qiu
College of Big Data and Internet, Shenzhen Technology University, China
G
Gan Ma
Sino-German College of Intelligent Manufacturing, Shenzhen Technology University, China
Zhanat Kappassov
Zhanat Kappassov
Nazarbayev University
TouchTactile SensingHaptic InterfacesRoboticsControl
Q
Qiang Li
College of Big Data and Internet, Shenzhen Technology University, China
Nutan Chen
Nutan Chen
Foundation Robotics
Machine learningRobotics