🤖 AI Summary
Existing vision-language-action (VLA) models rely on reactive state-to-action mappings in dynamic visual environments, resulting in myopic decision-making and limited robustness. To address this, we propose F1, a framework that reformulates action generation as an inverse dynamics problem grounded in visual foresight: actions are planned proactively by predicting goal-directed future visual states. Our method employs a Mixture-of-Transformer architecture integrating perception, visual-foresight, and action-control modules, trained via a three-stage strategy on over 330k trajectories spanning 136 tasks. Evaluated on both real-world and simulation benchmarks, F1 achieves significant improvements in task success rate and cross-scenario generalization. Notably, it is the first VLA model to enable language-instructed, embodied foresight, i.e., generating actions based on predicted future visual states, thereby advancing beyond reactive paradigms toward anticipatory, goal-conditioned behavior.
📝 Abstract
Executing language-conditioned tasks in dynamic visual environments remains a central challenge in embodied AI. Existing Vision-Language-Action (VLA) models predominantly adopt reactive state-to-action mappings, often leading to short-sighted behaviors and poor robustness in dynamic scenes. In this paper, we introduce F1, a pretrained VLA framework that integrates visual foresight generation into the decision-making pipeline. F1 adopts a Mixture-of-Transformer architecture with dedicated modules for perception, foresight generation, and control, thereby bridging understanding, generation, and action. At its core, F1 employs a next-scale prediction mechanism to synthesize goal-conditioned visual foresight as explicit planning targets. By forecasting plausible future visual states, F1 reformulates action generation as a foresight-guided inverse dynamics problem, enabling actions that implicitly achieve visual goals. To endow F1 with robust and generalizable capabilities, we propose a three-stage training recipe on an extensive dataset comprising over 330k trajectories across 136 diverse tasks. This training scheme enhances modular reasoning and equips the model with transferable visual foresight, which is critical for complex and dynamic environments. Extensive evaluations on real-world tasks and simulation benchmarks demonstrate that F1 consistently outperforms existing approaches, achieving substantial gains in both task success rate and generalization ability.
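The foresight-guided inverse dynamics idea can be sketched in two steps: (1) predict a goal-conditioned future visual state from the current observation and the language instruction, then (2) infer the action that bridges the current and predicted states. The sketch below is illustrative only, with tiny random linear maps standing in for F1's actual transformer modules; all function names, dimensions, and weights are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def foresight_predictor(obs, lang, W):
    # Stand-in for the foresight module: map (observation, instruction)
    # features to a predicted future visual state.
    return np.tanh(np.concatenate([obs, lang]) @ W)

def inverse_dynamics(obs, future_obs, W):
    # Stand-in for the control module: infer the action that would
    # carry the agent from obs to the predicted future_obs.
    return np.concatenate([obs, future_obs]) @ W

obs = rng.standard_normal(64)    # current visual features (assumed dim)
lang = rng.standard_normal(32)   # instruction embedding (assumed dim)
W_fore = rng.standard_normal((96, 64)) * 0.1   # 64 + 32 -> 64
W_act = rng.standard_normal((128, 7)) * 0.1    # 64 + 64 -> 7-DoF action

future = foresight_predictor(obs, lang, W_fore)  # predicted visual goal
action = inverse_dynamics(obs, future, W_act)    # action implied by the goal
```

The key design point is that the action head never sees the language instruction directly; the instruction shapes the predicted visual goal, and the action is recovered purely from the (current, future) state pair.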