SmoothVLA: Aligning Vision-Language-Action Models with Physical Constraints via Intrinsic Smoothness Optimization

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing task performance and motion smoothness when fine-tuning vision–language–action (VLA) models: supervised fine-tuning suffers from limited generalization, while reinforcement learning (RL) often yields jittery trajectories that violate physical constraints. To this end, we propose SmoothVLA, a novel framework that explicitly incorporates trajectory smoothness as an optimization prior in RL. We introduce a physics-informed intrinsic reward based on jerk (the derivative of acceleration), which guides the policy to generate physically plausible actions without requiring external feedback. Combining sparse task rewards with continuous smoothness rewards, SmoothVLA is trained end-to-end with Group Relative Policy Optimization (GRPO). Evaluated on the LIBERO benchmark, SmoothVLA improves trajectory smoothness by 13.8% over standard RL and demonstrates significantly better multi-task generalization than supervised fine-tuning.
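The jerk-based intrinsic reward described above can be sketched with finite differences over a policy rollout. This is an illustrative reconstruction, not the paper's exact formulation: the function name, the squashing via an exponential, and the `scale` parameter are all assumptions.

```python
import numpy as np

def smoothness_reward(positions, dt=0.05, scale=1e-6):
    """Hypothetical jerk-based intrinsic reward (a sketch, not SmoothVLA's
    published formula): penalize mean squared jerk of a rollout.

    positions: (T, D) array of end-effector positions sampled every dt seconds.
    Jerk is the third time derivative of position, approximated here by
    successive finite differences.
    """
    velocity = np.diff(positions, axis=0) / dt       # (T-1, D)
    acceleration = np.diff(velocity, axis=0) / dt    # (T-2, D)
    jerk = np.diff(acceleration, axis=0) / dt        # (T-3, D)
    mean_sq_jerk = np.mean(np.sum(jerk ** 2, axis=-1))
    # Map to a bounded reward in (0, 1]: smoother trajectories score higher.
    return float(np.exp(-scale * mean_sq_jerk))

# A straight-line trajectory (zero jerk) should outscore a noisy one.
t = np.linspace(0.0, 1.0, 50)[:, None]
smooth_traj = np.hstack([t, t, t])
rng = np.random.default_rng(0)
noisy_traj = smooth_traj + 0.01 * rng.normal(size=smooth_traj.shape)
assert smoothness_reward(smooth_traj) > smoothness_reward(noisy_traj)
```

Because the reward depends only on the rollout's own positions, it is intrinsic in the paper's sense: no environment feedback or hand-tuned external reward is needed.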

📝 Abstract
Vision-Language-Action (VLA) models have emerged as a powerful paradigm for robotic manipulation. However, existing post-training methods face a dilemma between stability and exploration: Supervised Fine-Tuning (SFT) is constrained by demonstration quality and lacks generalization, whereas Reinforcement Learning (RL) improves exploration but often induces erratic, jittery trajectories that violate physical constraints. To bridge this gap, we propose SmoothVLA, a novel reinforcement learning fine-tuning framework that synergistically optimizes task performance and motion smoothness. The technical core is a physics-informed hybrid reward function that integrates binary sparse task rewards with a continuous dense term derived from trajectory jerk. Crucially, this reward is intrinsic, computed directly from policy rollouts without requiring extrinsic environment feedback or laborious reward engineering. Leveraging Group Relative Policy Optimization (GRPO), SmoothVLA establishes trajectory smoothness as an explicit optimization prior, guiding the model toward physically feasible and stable control. Extensive experiments on the LIBERO benchmark demonstrate that SmoothVLA outperforms standard RL by 13.8% in smoothness and significantly surpasses SFT in generalization across diverse tasks. Our work offers a scalable approach to aligning VLA models with physical-world constraints through intrinsic reward optimization.
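GRPO scores each rollout relative to a group of rollouts for the same prompt, which is how the sparse task reward and dense smoothness reward can be combined without a learned critic. The sketch below shows only the group-relative advantage step; the mixing weight `alpha` and the function name are assumptions, not values from the paper.

```python
import numpy as np

def grpo_advantages(task_rewards, smooth_rewards, alpha=0.1, eps=1e-8):
    """Group-relative advantages over one group of rollouts (a sketch of
    GRPO's normalization; `alpha` is a hypothetical weight mixing the
    binary sparse task reward with the dense smoothness reward).
    """
    total = np.asarray(task_rewards, dtype=float) \
        + alpha * np.asarray(smooth_rewards, dtype=float)
    # Normalize within the group: no value function is needed.
    return (total - total.mean()) / (total.std() + eps)

# Example: 4 rollouts of one task; two succeed, with differing smoothness.
adv = grpo_advantages(task_rewards=[1, 1, 0, 0],
                      smooth_rewards=[0.9, 0.3, 0.8, 0.2])
# Among the two successes, the smoother rollout gets the larger advantage,
# so the policy is pushed toward smooth *and* successful behavior.
assert adv[0] > adv[1] > adv[2] > adv[3]
```

The dense smoothness term also breaks ties between rollouts with identical task outcomes, which otherwise carry zero learning signal under a purely sparse reward.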
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
physical constraints
motion smoothness
reinforcement learning
robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action
Intrinsic Reward
Trajectory Smoothness
Reinforcement Learning Fine-tuning
Physical Constraints
Jiashun Li
Chongqing University of Posts and Telecommunications; Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences
Xiaoyu Shi
Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences
Hong Xie
University of Science and Technology of China (USTC)
Data Science/Mining; Online Learning
Mingsheng Shang
Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences
Yun Lu
Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences