F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions

📅 2025-09-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language-action (VLA) models rely on reactive state-to-action mappings in dynamic visual environments, resulting in myopic decision-making and limited robustness. To address this, the authors propose F1, a framework that reformulates action generation as an inverse dynamics problem grounded in visual foresight: actions are planned proactively by predicting goal-directed future visual states. The method employs a Mixture-of-Transformers architecture integrating perception, visual foresight, and action control modules, trained via a three-stage strategy on over 330K trajectories spanning 136 tasks. Evaluated on both real-world and simulation benchmarks, F1 achieves significant improvements in task success rate and cross-scenario generalization. Notably, it enables language-instructed, embodied foresight, i.e., generating actions conditioned on predicted future visual states, thereby advancing beyond reactive paradigms toward anticipatory, goal-conditioned behavior.

📝 Abstract
Executing language-conditioned tasks in dynamic visual environments remains a central challenge in embodied AI. Existing Vision-Language-Action (VLA) models predominantly adopt reactive state-to-action mappings, often leading to short-sighted behaviors and poor robustness in dynamic scenes. In this paper, we introduce F1, a pretrained VLA framework that integrates visual foresight generation into the decision-making pipeline. F1 adopts a Mixture-of-Transformers architecture with dedicated modules for perception, foresight generation, and control, thereby bridging understanding, generation, and action. At its core, F1 employs a next-scale prediction mechanism to synthesize goal-conditioned visual foresight as explicit planning targets. By forecasting plausible future visual states, F1 reformulates action generation as a foresight-guided inverse dynamics problem, enabling actions that implicitly achieve visual goals. To endow F1 with robust and generalizable capabilities, we propose a three-stage training recipe on an extensive dataset comprising over 330K trajectories across 136 diverse tasks. This training scheme enhances modular reasoning and equips the model with transferable visual foresight, which is critical for complex and dynamic environments. Extensive evaluations on real-world tasks and simulation benchmarks demonstrate that F1 consistently outperforms existing approaches, achieving substantial gains in both task success rate and generalization ability.
Problem

Research questions and friction points this paper is trying to address.

Executing language-conditioned tasks in dynamic visual environments
Overcoming short-sighted reactive behaviors in VLA models
Integrating visual foresight into decision-making for embodied AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates visual foresight generation into the decision-making pipeline
Employs next-scale prediction mechanism for explicit planning targets
Reformulates action generation as foresight-guided inverse dynamics problem
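The foresight-guided inverse dynamics idea above can be illustrated as a two-step forward pass: first predict a goal-conditioned future visual state, then infer the action that would carry the current state to that predicted future. The sketch below is a minimal toy illustration assuming simple MLP modules and invented names; the actual F1 model uses a Mixture-of-Transformers with next-scale prediction, not these layers.

```python
# Hypothetical sketch of foresight-guided inverse dynamics.
# All module names and dimensions here are illustrative assumptions,
# not taken from the paper's released code.
import torch
import torch.nn as nn

class ForesightInverseDynamics(nn.Module):
    def __init__(self, state_dim=256, action_dim=7):
        super().__init__()
        # Foresight module: predicts a future visual latent from the
        # current visual latent plus a language-instruction embedding.
        self.foresight = nn.Sequential(
            nn.Linear(state_dim * 2, state_dim), nn.GELU(),
            nn.Linear(state_dim, state_dim),
        )
        # Inverse-dynamics head: maps (current, predicted future) to an action.
        self.action_head = nn.Sequential(
            nn.Linear(state_dim * 2, state_dim), nn.GELU(),
            nn.Linear(state_dim, action_dim),
        )

    def forward(self, visual_latent, instruction_latent):
        # Step 1: forecast the goal-conditioned future visual state.
        future = self.foresight(
            torch.cat([visual_latent, instruction_latent], dim=-1))
        # Step 2: infer the action that bridges current -> predicted future.
        action = self.action_head(
            torch.cat([visual_latent, future], dim=-1))
        return action, future

model = ForesightInverseDynamics()
vis = torch.randn(4, 256)   # batch of current visual latents
txt = torch.randn(4, 256)   # batch of instruction embeddings
action, predicted_future = model(vis, txt)
print(action.shape)             # torch.Size([4, 7])
print(predicted_future.shape)   # torch.Size([4, 256])
```

The key design point the bullets describe is that the action head never sees the language instruction directly: the instruction shapes the predicted future state, and the action is recovered as the inverse dynamics between the current and predicted states.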
👥 Authors
Qi Lv — Shanghai AI Laboratory
Weijie Kong — Shanghai AI Laboratory
Hao Li — Shanghai AI Laboratory
Jia Zeng — Shanghai AI Laboratory
Zherui Qiu — Shanghai AI Laboratory
Delin Qu — PhD Candidate, Fudan University (Embodied AI, 3D Vision, Multimodal Generation)
Haoming Song — Shanghai AI Laboratory
Qizhi Chen — PhD Candidate, Zhejiang University (Multimodal Reasoning, Embodied AI, 3D Vision)
Xiang Deng — Scale AI (Machine Learning, NLP, Knowledge Graphs, Semantic Web)
Jiangmiao Pang — Shanghai AI Laboratory