🤖 AI Summary
Pre-trained vision-language-action (VLA) models often suffer significant performance degradation in real-world robot deployment due to distribution shift, while existing fine-tuning approaches demand extensive demonstration data and substantial computational resources, limiting their practicality. This paper introduces VLA-Pilot: a plug-and-play, inference-time policy guidance framework that requires neither fine-tuning nor additional data. Its core is an embodied evolutionary diffusion mechanism, which jointly optimizes action sequences during inference via iterative evolutionary search over diffusion-based priors conditioned on visual-linguistic context, enabling closed-loop control. To the authors' knowledge, VLA-Pilot is the first method to achieve zero-shot generalization across diverse manipulation tasks and heterogeneous robot morphologies. Evaluated on six real-world robotic manipulation tasks, it substantially improves the success rates of pre-trained VLA policies, demonstrating strong robustness and adaptability in both in-distribution and out-of-distribution scenarios.
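The inference-time guidance idea above can be illustrated at a high level: candidate action sequences are drawn from a pre-trained policy's sampler, scored against a task objective, and refined by evolutionary selection and mutation before execution. The sketch below is a minimal, hypothetical illustration of that loop, not the paper's actual implementation: `sample_from_policy`, the quadratic stand-in objective, and all hyperparameters are placeholder assumptions.

```python
import numpy as np

HORIZON, ACTION_DIM = 8, 2

def sample_from_policy(rng, n):
    # Stand-in for sampling action sequences from a pre-trained
    # diffusion policy (hypothetical; the real prior is learned).
    return rng.normal(0.0, 1.0, size=(n, HORIZON, ACTION_DIM))

def score(seqs, goal):
    # Stand-in task objective: prefer sequences whose cumulative
    # displacement lands near a goal (hypothetical surrogate for
    # the visual-linguistic task reward).
    endpoints = seqs.sum(axis=1)
    return -np.linalg.norm(endpoints - goal, axis=1)

def evolutionary_guidance(goal, pop=64, elites=8, iters=20, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    population = sample_from_policy(rng, pop)
    for _ in range(iters):
        fitness = score(population, goal)
        elite_idx = np.argsort(fitness)[-elites:]
        elite = population[elite_idx]
        # Mutate elites to refill the population; keeping the elites
        # themselves preserves the best candidates found so far.
        children = elite[rng.integers(0, elites, size=pop - elites)]
        children = children + rng.normal(0.0, sigma, size=children.shape)
        population = np.concatenate([elite, children])
    # In a closed-loop controller, only the first action of the best
    # sequence would be executed before re-planning.
    return population[np.argmax(score(population, goal))]

best_seq = evolutionary_guidance(goal=np.array([1.0, -0.5]))
print(best_seq.shape)  # (8, 2)
```

The key design choice this sketch mirrors is that the search is seeded by, and perturbs around, samples from the policy prior, so the optimized actions stay close to behaviors the pre-trained model already supports rather than drifting to arbitrary action sequences.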
📝 Abstract
Vision-Language-Action (VLA) models have demonstrated significant potential in real-world robotic manipulation. However, pre-trained VLA policies still suffer from substantial performance degradation during downstream deployment. Although fine-tuning can mitigate this issue, its reliance on costly demonstration collection and intensive computation makes it impractical in real-world settings. In this work, we introduce VLA-Pilot, a plug-and-play inference-time policy steering method for zero-shot deployment of pre-trained VLA policies without any additional fine-tuning or data collection. We evaluate VLA-Pilot on six real-world downstream manipulation tasks across two distinct robotic embodiments, encompassing both in-distribution and out-of-distribution scenarios. Experimental results demonstrate that VLA-Pilot substantially boosts the success rates of off-the-shelf pre-trained VLA policies, enabling robust zero-shot generalization to diverse tasks and embodiments. Experimental videos and code are available at: https://rip4kobe.github.io/vla-pilot/.