🤖 AI Summary
This work addresses the challenge of achieving high-precision contact-rich manipulation on robotic platforms lacking physical force sensors while preserving visual-language semantic integrity. The authors propose a Force Distillation Module (FDM) that learns from visual inputs and robot states to generate force representations aligned with real force signals, which are then injected into a pretrained vision-language model to enable sensorless force perception. Notably, force measurements are needed only to supervise distillation during training; at deployment the method requires no force sensor, thereby reducing hardware dependency while improving multimodal alignment and task robustness. Real-world robotic experiments demonstrate that the approach outperforms both force-sensor-based methods and other baselines in contact-intensive tasks, validating the efficacy of the proposed force distillation mechanism.
📝 Abstract
Force sensing is a crucial modality for Vision-Language-Action (VLA) frameworks, as it enables fine-grained perception and dexterous manipulation in contact-rich tasks. We present Force-Distilled VLA (FD-VLA), a novel framework that integrates force awareness into contact-rich manipulation without relying on physical force sensors. The core of our approach is a Force Distillation Module (FDM), which distills force information by mapping a learnable query token, conditioned on visual observations and robot states, into a predicted force token aligned with the latent representation of actual force signals. During inference, this distilled force token is injected into the pretrained vision-language model (VLM), enabling force-aware reasoning while preserving the integrity of its vision-language semantics. This design provides two key benefits: first, it allows practical deployment across a wide range of robots that lack expensive or fragile force-torque sensors, thereby reducing hardware cost and complexity; second, the FDM introduces an additional force-vision-state fusion prior to the VLM, which improves cross-modal alignment and enhances perception-action robustness in contact-rich scenarios. Surprisingly, our physical experiments show that the distilled force token outperforms direct sensor force measurements as well as other baselines, which highlights the effectiveness of this force-distilled VLA approach.
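To make the distillation objective concrete, the following is a minimal NumPy sketch of the idea described in the abstract: a teacher encoder maps real force-torque readings to a latent force token, and a student module maps a learnable query plus vision and state features to a predicted token that is aligned with the teacher's latent. All module names, dimensions, and the choice of an MSE alignment loss are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the force-distillation objective.
# Shapes, module sizes, and the MSE loss are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
D = 32  # dimension of the latent force token (assumed)

def init_mlp(d_in, d_hidden, d_out):
    """Initialize a two-layer MLP as a flat parameter list."""
    return [rng.normal(0, 0.1, (d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(0, 0.1, (d_hidden, d_out)), np.zeros(d_out)]

def mlp(params, x):
    """Apply a two-layer MLP with a tanh hidden activation."""
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Teacher: encodes real 6-DoF force/torque readings into a latent force token.
# Used only during training, when sensor data is available.
force_encoder = init_mlp(6, 64, D)

# Student (the FDM analogue): maps [learnable query; vision features; robot
# state] to a predicted force token. Feature dimensions are assumptions.
query = rng.normal(0, 0.1, 16)           # learnable query token
fdm = init_mlp(16 + 128 + 14, 64, D)     # query + vision (128) + state (14)

def distill_loss(force_reading, vision_feat, robot_state):
    """MSE alignment between the predicted and teacher force tokens."""
    target = mlp(force_encoder, force_reading)           # latent of real force
    student_in = np.concatenate([query, vision_feat, robot_state])
    pred = mlp(fdm, student_in)                          # distilled force token
    return float(np.mean((pred - target) ** 2))

# At inference no force sensor is consulted: only the predicted token would be
# injected into the VLM alongside the vision-language tokens.
loss = distill_loss(rng.normal(size=6), rng.normal(size=128), rng.normal(size=14))
print(f"distillation loss: {loss:.4f}")
```

In a real system the teacher latent would come from the model's own force encoder and both networks would be trained jointly, so the gradients of this alignment loss shape the student's force token to be a drop-in substitute for sensed force at deployment.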