FD-VLA: Force-Distilled Vision-Language-Action Model for Contact-Rich Manipulation

📅 2026-02-02

🤖 AI Summary
This work addresses the challenge of achieving high-precision contact-rich manipulation on robotic platforms lacking physical force sensors while preserving visual–language semantic integrity. The authors propose a Force Distillation Module (FDM) that learns from visual inputs and robot states to generate force representations aligned with real force signals, which are then injected into a pretrained vision–language model to enable sensorless force perception. Notably, the method distills high-quality force representations without requiring ground-truth force measurements, thereby reducing hardware dependency and enhancing multimodal alignment and task robustness. Real-world robotic experiments demonstrate that the approach outperforms both force-sensor-based methods and other baselines in contact-intensive tasks, validating the efficacy and superiority of the proposed force distillation mechanism.

📝 Abstract
Force sensing is a crucial modality for Vision-Language-Action (VLA) frameworks, as it enables fine-grained perception and dexterous manipulation in contact-rich tasks. We present Force-Distilled VLA (FD-VLA), a novel framework that integrates force awareness into contact-rich manipulation without relying on physical force sensors. The core of our approach is a Force Distillation Module (FDM), which distills force by mapping a learnable query token, conditioned on visual observations and robot states, into a predicted force token aligned with the latent representation of actual force signals. During inference, this distilled force token is injected into the pretrained VLM, enabling force-aware reasoning while preserving the integrity of its vision-language semantics. This design provides two key benefits: first, it allows practical deployment across a wide range of robots that lack expensive or fragile force-torque sensors, thereby reducing hardware cost and complexity; second, the FDM introduces an additional force-vision-state fusion prior to the VLM, which improves cross-modal alignment and enhances perception-action robustness in contact-rich scenarios. Surprisingly, our physical experiments show that the distilled force token outperforms direct sensor force measurements as well as other baselines, which highlights the effectiveness of this force-distilled VLA approach.
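The distillation mechanism described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it shows, under illustrative dimensions and random placeholder inputs, how a learnable query token might cross-attend over visual and robot-state tokens to produce a predicted force token, which is then trained to match the latent of a real force signal via an alignment loss. All tensor shapes, weight initializations, and the single-head attention are assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context, Wq, Wk, Wv):
    """Single-head cross-attention: the query attends over context tokens."""
    q = query @ Wq                                  # (1, d)
    k = context @ Wk                                # (n, d)
    v = context @ Wv                                # (n, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (1, n) attention weights
    return attn @ v                                 # (1, d) predicted force token

d = 16  # illustrative embedding dimension

# Hypothetical inputs: visual patch tokens and robot-state tokens
vision_tokens = rng.normal(size=(8, d))
state_tokens = rng.normal(size=(2, d))
context = np.concatenate([vision_tokens, state_tokens])  # fused conditioning

# Learnable force query token and projection weights (randomly initialized here)
force_query = rng.normal(size=(1, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))

force_token = cross_attention(force_query, context, Wq, Wk, Wv)

# Distillation target: latent of the actual force signal from a force encoder
# (only needed at training time; at inference the distilled token stands alone)
force_latent = rng.normal(size=(1, d))
distill_loss = np.mean((force_token - force_latent) ** 2)  # alignment loss
```

At inference, `force_token` would be injected into the VLM's token stream in place of any sensor reading, which is how the framework avoids a physical force-torque sensor.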
Problem

Research questions and friction points this paper addresses.

Force Sensing
Vision-Language-Action
Contact-Rich Manipulation
Sensorless Force Perception
Robotic Manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Force Distillation
Vision-Language-Action
Contact-Rich Manipulation
Sensor-Free Force Estimation
Multimodal Fusion
Ruiteng Zhao
Advanced Robotics Centre, National University of Singapore; SIMTech, A*STAR
Wenshuo Wang
Professor, Beijing Institute of Technology (BIT) | Research Fellow, UC Berkeley, CMU, McGill
Human-Robot Interaction, Autonomous Driving, Bayesian Learning, Human Factors
Yicheng Ma
School of Electrical & Electronic Engineering, Nanyang Technological University (NTU); SIMTech, A*STAR
Xiaocong Li
College of Information Science and Technology, Eastern Institute of Technology, Ningbo; John A. Paulson School of Engineering and Applied Sciences, Harvard University
Francis E. H. Tay
Advanced Robotics Centre, National University of Singapore
Marcelo H. Ang
Advanced Robotics Centre, National University of Singapore
Haiyue Zhu
Singapore Institute of Manufacturing Technologies, Agency for Science, Technology and Research (A*STAR)