Tactile-VLA: Unlocking Vision-Language-Action Model's Physical Knowledge for Tactile Generalization

📅 2025-07-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of vision-language-action (VLA) models in fine-grained, force-controlled physical interactions—particularly in contact-rich scenarios. We propose the first framework that tightly integrates implicit physical semantics encoded in VLA models with real-time tactile sensing, enabling zero-shot tactile generalization and adaptive inference via few-shot demonstrations that activate the model’s prior understanding of contact mechanics. Our method unifies a VLA foundation model, high-resolution tactile sensors, a hybrid position-force controller, and a tactile-feedback-driven reasoning module into an end-to-end multimodal closed-loop control architecture. Experiments demonstrate significant improvements over baselines across three axes: tactile instruction following, tactile commonsense reasoning, and adaptive force control. The approach successfully executes diverse contact-intensive tasks in zero-shot settings, establishing a new paradigm for general-purpose robotic physical interaction.

📝 Abstract
Vision-Language-Action (VLA) models have shown remarkable achievements, driven by the rich implicit knowledge of their vision-language components. However, achieving generalist robotic agents demands precise grounding into physical interactions, especially in contact-rich scenarios where fine-grained force control is essential. We advance VLAs' implicit knowledge beyond identifying what to do, towards guiding how to physically interact with the real world. This paper introduces Tactile-VLA, a novel framework that deeply fuses vision, language, action, and tactile sensing. This framework incorporates a hybrid position-force controller to translate the model's intentions into precise physical actions and a reasoning module that allows the robot to adapt its strategy based on tactile feedback. Experiments demonstrate Tactile-VLA's effectiveness and generalizability in three key aspects: (1) enabling tactile-aware instruction following, (2) utilizing tactile-relevant commonsense, and (3) facilitating adaptive tactile-involved reasoning. A key finding is that the VLM's prior knowledge already contains semantic understanding of physical interaction; by connecting it to the robot's tactile sensors with only a few demonstrations, we can activate this prior knowledge to achieve zero-shot generalization in contact-rich tasks.
Problem

Research questions and friction points this paper is trying to address.

Bridging vision-language-action models to physical interactions
Enhancing tactile sensing for contact-rich robotic tasks
Activating prior knowledge for zero-shot tactile generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses vision, language, action, and tactile sensing
Uses hybrid position-force controller for precise actions
Adapts strategy via tactile feedback reasoning module
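The hybrid position-force controller named above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a decoupled per-axis model in which a selection vector `s` (a hypothetical name, following the classic hybrid-control formulation) marks each axis as force-controlled (1) or position-controlled (0), and the gains `kp`, `kf` are illustrative placeholders.

```python
import numpy as np

def hybrid_control(x, x_des, f, f_des, s, kp=50.0, kf=0.1):
    """Return a commanded velocity per axis (illustrative sketch).

    Position-controlled axes track x_des with a proportional law;
    force-controlled axes regulate measured contact force f toward f_des.
    """
    s = np.asarray(s, dtype=float)
    pos_cmd = kp * (np.asarray(x_des) - np.asarray(x))    # position error feedback
    force_cmd = kf * (np.asarray(f_des) - np.asarray(f))  # force error feedback
    # Selection vector blends the two control modes axis by axis.
    return (1.0 - s) * pos_cmd + s * force_cmd

# Example: z-axis regulates contact force, x/y track position.
v = hybrid_control(x=[0.0, 0.0, 0.10], x_des=[0.05, 0.0, 0.10],
                   f=[0.0, 0.0, 2.0], f_des=[0.0, 0.0, 5.0],
                   s=[0, 0, 1])
```

In a Tactile-VLA-style pipeline, the model's predicted targets would supply `x_des` and `f_des`, while the tactile sensors supply the measured force `f` closing the loop.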