CRAFT: Adapting VLA Models to Contact-rich Manipulation via Force-aware Curriculum Fine-tuning

📅 2026-02-13

📝 Abstract
Vision-Language-Action (VLA) models have shown a strong capability in enabling robots to execute general instructions, yet they struggle with contact-rich manipulation tasks, where success requires precise alignment, stable contact maintenance, and effective handling of deformable objects. A fundamental challenge arises from the imbalance between high-entropy vision and language inputs and low-entropy but critical force signals, which often leads to over-reliance on perception and unstable control. To address this, we introduce CRAFT, a force-aware curriculum fine-tuning framework that integrates a variational information bottleneck module to regulate vision and language embeddings during early training. This curriculum strategy encourages the model to prioritize force signals initially, before progressively restoring access to the full multimodal information. To enable force-aware learning, we further design a homologous leader-follower teleoperation system that collects synchronized vision, language, and force data across diverse contact-rich tasks. Real-world experiments demonstrate that CRAFT consistently improves task success, generalizes to unseen objects and novel task variations, and adapts effectively across diverse VLA architectures, enabling robust and generalizable contact-rich manipulation.
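The curriculum described in the abstract — compressing vision and language embeddings through a variational information bottleneck early in fine-tuning so the model leans on force signals, then progressively restoring full multimodal access — could be sketched as below. This is an illustrative reconstruction under stated assumptions, not the paper's released code: the module interface, latent size, and the cosine annealing of the bottleneck strength are all choices made here for clarity.

```python
import math
import torch
import torch.nn as nn


class VIBRegulator(nn.Module):
    """Variational information bottleneck over one modality's embedding.

    Maps the embedding to a stochastic latent and returns a KL penalty
    against a standard-normal prior; weighting that penalty heavily limits
    how much vision/language information reaches the policy head.
    """

    def __init__(self, dim: int, latent_dim: int):
        super().__init__()
        self.to_mu = nn.Linear(dim, latent_dim)
        self.to_logvar = nn.Linear(dim, latent_dim)
        self.to_out = nn.Linear(latent_dim, dim)

    def forward(self, x: torch.Tensor):
        mu, logvar = self.to_mu(x), self.to_logvar(x)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL(q(z|x) || N(0, I)), averaged over the batch.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1).mean()
        return self.to_out(z), kl


def bottleneck_weight(step: int, total_steps: int, beta_max: float = 1.0) -> float:
    """Curriculum schedule (assumed cosine decay): a strong bottleneck
    early in training, relaxed toward zero so the full vision/language
    signal is gradually restored."""
    progress = min(step / max(total_steps, 1), 1.0)
    return beta_max * 0.5 * (1.0 + math.cos(math.pi * progress))
```

In a training loop, the total loss would combine the action objective with `bottleneck_weight(step, total_steps) * kl` for each regulated modality, so force features carry the gradient early while perception is reintroduced later.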
Problem

Research questions and friction points this paper is trying to address.

contact-rich manipulation
Vision-Language-Action models
force signals
multimodal imbalance
robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

force-aware learning
curriculum fine-tuning
contact-rich manipulation
variational information bottleneck
vision-language-action models
Yike Zhang
National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
Yaonan Wang
National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
Xinxin Sun
National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
Kaizhen Huang
National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
Zhiyuan Xu
X-Humanoid, Midea Group, Syracuse University
Deep Reinforcement Learning · Deep Learning · Communication Networking
Junjie Ji
Beijing Innovation Center of Humanoid Robotics, Beijing, 102600, China
Zhengping Che
X-Humanoid
Embodied AI · Deep Learning
Jian Tang
Beijing Innovation Center of Humanoid Robotics, Beijing, 102600, China
Jingtao Sun
Department of Electrical and Computer Engineering, National University of Singapore, 119077, Singapore