🤖 AI Summary
This work addresses the challenge of balancing task performance and motion smoothness in fine-tuning vision–language–action (VLA) models: supervised fine-tuning suffers from limited generalization, while reinforcement learning (RL) often yields jittery trajectories that violate physical constraints. To this end, we propose SmoothVLA, a novel framework that explicitly incorporates trajectory smoothness as an optimization prior in RL. We introduce a physics-informed intrinsic reward based on jerk (the derivative of acceleration), which guides the policy to generate physically plausible actions without requiring external feedback. Combining sparse task rewards with this continuous smoothness reward, our approach trains the policy end-to-end with Group Relative Policy Optimization (GRPO). Evaluated on the LIBERO benchmark, SmoothVLA improves trajectory smoothness by 13.8% over standard RL and demonstrates significantly better multi-task generalization than supervised fine-tuning.
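The jerk-based intrinsic reward described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the finite-difference estimate, the timestep `dt`, and the function name are assumptions, but the core idea, penalizing the squared third time derivative of position, matches the description.

```python
import numpy as np

def jerk_smoothness_reward(positions: np.ndarray, dt: float = 0.05) -> float:
    """Dense intrinsic reward that penalizes trajectory jerk.

    positions: (T, D) array of end-effector or joint positions sampled
    at a fixed timestep dt. Jerk (the third time derivative of position)
    is estimated with repeated finite differences. Returns a non-positive
    scalar: smoother trajectories score closer to zero.
    """
    velocity = np.diff(positions, axis=0) / dt
    acceleration = np.diff(velocity, axis=0) / dt
    jerk = np.diff(acceleration, axis=0) / dt
    # Negative mean squared jerk: since the policy maximizes reward,
    # this term pushes it toward smooth, physically plausible motion.
    return -float(np.mean(np.sum(jerk**2, axis=-1)))
```

Because the reward is computed purely from the policy's own rollout, it needs no extra sensors or environment instrumentation, which is what makes it "intrinsic" in the sense used here.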
📝 Abstract
Vision-Language-Action (VLA) models have emerged as a powerful paradigm for robotic manipulation. However, existing post-training methods face a dilemma between stability and exploration: Supervised Fine-Tuning (SFT) is constrained by demonstration quality and lacks generalization, whereas Reinforcement Learning (RL) improves exploration but often induces erratic, jittery trajectories that violate physical constraints. To bridge this gap, we propose SmoothVLA, a novel reinforcement learning fine-tuning framework that synergistically optimizes task performance and motion smoothness. The technical core is a physics-informed hybrid reward function that integrates binary sparse task rewards with a continuous dense term derived from trajectory jerk. Crucially, this reward is intrinsic, computed directly from policy rollouts, without requiring extrinsic environment feedback or laborious reward engineering. Leveraging Group Relative Policy Optimization (GRPO), SmoothVLA establishes trajectory smoothness as an explicit optimization prior, guiding the model toward physically feasible and stable control. Extensive experiments on the LIBERO benchmark demonstrate that SmoothVLA outperforms standard RL by 13.8% in smoothness and significantly surpasses SFT in generalization across diverse tasks. Our work offers a scalable approach to aligning VLA models with physical-world constraints through intrinsic reward optimization.
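The hybrid-reward GRPO update can be illustrated with a small sketch. This assumes the standard critic-free GRPO recipe (normalizing each rollout's reward by its group's mean and standard deviation); the weighting coefficient `beta` and the function name are hypothetical, not taken from the paper.

```python
import numpy as np

def grpo_advantages(task_success: np.ndarray,
                    smoothness: np.ndarray,
                    beta: float = 0.1) -> np.ndarray:
    """Group-relative advantages for G rollouts of the same task prompt.

    task_success: (G,) binary sparse task reward per rollout.
    smoothness:   (G,) dense jerk-based reward per rollout (non-positive).
    beta:         weight on the smoothness term (assumed coefficient).

    GRPO scores each rollout against its own group baseline, so no
    learned value critic is needed.
    """
    rewards = task_success + beta * smoothness
    # Normalize within the group: rollouts better than the group
    # average get positive advantage, worse ones negative.
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)
```

In this scheme the sparse success bit dominates when rollouts differ in outcome, while the dense jerk term breaks ties among equally successful rollouts in favor of the smoother one.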