EVOLVE-VLA: Test-Time Training from Environment Feedback for Vision-Language-Action Models

📅 2025-12-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current Vision-Language-Action (VLA) models face three critical bottlenecks: reliance on hundreds of supervised fine-tuning samples per task, rigid and inflexible policy execution, and insufficient environmental adaptability during deployment. This paper introduces the first online test-time training framework for embodied intelligence that enables continual learning, eliminating dependence on task-specific demonstrations and achieving autonomous policy optimization solely through environmental interaction. Key contributions include: (1) a dense, self-supervised feedback mechanism powered by a learned progress estimator; and (2) a dual-strategy design combining accumulative progress smoothing with progressive horizon extension to suppress noise and ensure stable policy evolution. Experiments demonstrate significant improvements: +8.6% success rate on long-horizon tasks, +22.0% in 1-shot learning, and 20.8% success in zero-demonstration cross-task generalization (vs. 0% for the SFT baseline). The framework further exhibits emergent capabilities, including error recovery and novel strategy discovery.


📝 Abstract
Achieving truly adaptive embodied intelligence requires agents that learn not just by imitating static demonstrations, but by continuously improving through environmental interaction, akin to how humans master skills through practice. Vision-Language-Action (VLA) models have advanced robotic manipulation by leveraging large language models, yet remain fundamentally limited by Supervised Finetuning (SFT): they require hundreds of demonstrations per task, rigidly memorize trajectories, and fail to adapt when deployment conditions deviate from training. We introduce EVOLVE-VLA, a test-time training framework enabling VLAs to continuously adapt through environment interaction with minimal or zero task-specific demonstrations. The key technical challenge is replacing oracle reward signals (unavailable at test time) with autonomous feedback. We address this through a learned progress estimator providing dense feedback, and, critically, we design our framework to "tame" this inherently noisy signal via two mechanisms: (1) an accumulative progress estimation mechanism that smooths noisy point-wise estimates, and (2) a progressive horizon extension strategy that enables gradual policy evolution. EVOLVE-VLA achieves substantial gains: +8.6% on long-horizon tasks, +22.0% in 1-shot learning, and cross-task generalization reaching 20.8% success on unseen tasks without training on task-specific demonstrations (vs. 0% for pure SFT). Qualitative analysis reveals emergent capabilities absent from the demonstrations, including error recovery and novel strategies. This work represents a critical step toward VLAs that truly learn and adapt, moving beyond static imitation toward continuous self-improvement.
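The accumulative progress estimation mechanism (smoothing noisy point-wise scores from the learned progress estimator) can be sketched as a simple running average over a trajectory. The function name, the exponential-moving-average form, and the `alpha` parameter below are illustrative assumptions, not the paper's actual formulation:

```python
def accumulate_progress(estimates, alpha=0.5):
    """Smooth noisy per-step progress scores with an exponential moving average.

    estimates: list of raw progress scores in [0, 1] from a (hypothetical)
    learned progress estimator, one per environment step.
    Returns the smoothed trajectory, which damps point-wise noise before the
    values are used as a feedback signal for policy updates.
    """
    smoothed = []
    running = 0.0
    for p in estimates:
        # Blend the new noisy estimate into the running accumulator.
        running = alpha * running + (1.0 - alpha) * p
        smoothed.append(running)
    return smoothed
```

A single outlier estimate shifts the smoothed signal by only `(1 - alpha)` of its magnitude, which is the point of accumulating rather than trusting each point-wise score directly.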
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models fail to adapt when deployment conditions differ from training
Current methods require extensive demonstrations per task and memorize rigid trajectories
Agents lack autonomous feedback mechanisms for continuous improvement through environmental interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-time training framework for continuous adaptation
Learned progress estimator replacing oracle reward signals
Noise-taming mechanisms: accumulative progress estimation and progressive horizon extension
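The progressive horizon extension strategy can be sketched as a schedule that lengthens the rollout horizon as test-time training proceeds, so the policy evolves on short segments before facing the full task. All names and constants below are hypothetical placeholders; the summary does not specify the actual schedule:

```python
def horizon_schedule(round_idx, base=50, step=25, max_horizon=200):
    """Return the rollout horizon (in steps) for a given training round.

    Starts from a short horizon (`base`) and extends it by `step` each
    round, capped at `max_horizon`, so early updates see easier,
    lower-variance rollouts and later rounds cover the full task.
    """
    return min(base + step * round_idx, max_horizon)
```

For example, rounds 0, 2, and 10 would use horizons of 50, 100, and 200 steps under these placeholder constants.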
Zechen Bai — National University of Singapore (Multimodal, Computer Vision, Virtual Reality)
Chen Gao — Show Lab, National University of Singapore
Mike Zheng Shou — Show Lab, National University of Singapore