🤖 AI Summary
This work addresses the limited out-of-distribution (OOD) generalization of current vision-language-action (VLA) models, which is primarily caused by the scarcity of real-world robot data. The authors propose a hierarchical VLA framework that, for the first time, integrates a large-scale pretrained world model as a high-level planner to generate embodied visual subgoals. These subgoals provide both physical and visual guidance to the low-level VLA policy, enabling effective task decomposition and sequential planning. Evaluated with an identical VLA architecture, the approach raises task success rates in novel environments from 14% to 69%, substantially outperforming existing baselines and demonstrating strong generalization, particularly in OOD settings.
📝 Abstract
Vision-Language-Action (VLA) models are promising for generalist robot manipulation but remain brittle in out-of-distribution (OOD) settings, especially with limited real-robot data. To resolve this generalization bottleneck, we introduce \our{} (VISTA), a hierarchical Vision-Language-Action framework that leverages the generalization capabilities of a large-scale pre-trained world model for robust VIsual Subgoal TAsk decomposition. Our hierarchical framework \our{} consists of a world model as the high-level planner and a VLA as the low-level executor. The high-level world model first decomposes manipulation tasks into subtask sequences with goal images, and the low-level policy follows this textual and visual guidance to generate action sequences. Compared to raw textual goal specifications, these synthesized goal images provide visually and physically grounded details for the low-level policies, making it feasible to generalize across unseen objects and novel scenarios. We validate both visual goal synthesis and our hierarchical VLA policies in a wide range of out-of-distribution scenarios; with the guidance generated by the world model, the success rate of the same-structured VLA in novel scenarios improves from 14% to 69%. Results demonstrate that our method outperforms previous baselines by a clear margin, particularly in out-of-distribution scenarios. Project page: \href{https://vista-wm.github.io/}{https://vista-wm.github.io}