🤖 AI Summary
Long-horizon embodied navigation under complex natural language instructions remains challenging—particularly in unseen environments—due to poor long-term planning robustness and high task failure rates.
Method: This paper proposes a unified framework built on a vision-language model (VLM) that, for the first time, tightly integrates explicit linguistic planning with implicit spatiotemporal prediction within a single architecture, establishing a closed-loop “perceive–plan/predict–act” paradigm. It supports hierarchical subgoal decomposition and joint modeling of short- and long-term environmental dynamics, incorporating instruction understanding, history-aware observation encoding, generative world modeling, and dual-timescale future scene prediction.
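The closed loop described above can be sketched in plain Python. This is a minimal, hypothetical illustration of the control flow only: all function and class names below are invented for exposition, and each stage (planning, prediction, action selection) is a stub standing in for what the paper implements as a single VLM.

```python
# Illustrative sketch of a "perceive–plan/predict–act" loop.
# All names are hypothetical; in NavForesee these stages are one VLM.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    subgoals: list                                  # hierarchical decomposition of the instruction
    progress: int = 0                               # index of the current sub-goal
    history: list = field(default_factory=list)     # past observations

def plan(instruction: str, state: AgentState) -> str:
    """Explicit language planning: decompose the task, track progress, emit the next sub-goal."""
    if not state.subgoals:                          # decompose once, up front
        state.subgoals = [s.strip() for s in instruction.split(",")]
    return state.subgoals[min(state.progress, len(state.subgoals) - 1)]

def predict(state: AgentState, subgoal: str) -> dict:
    """Implicit world modeling: imagine futures at two timescales."""
    latest = state.history[-1] if state.history else "initial scene"
    return {
        "short_term": f"dynamics after '{latest}'",      # near-future environmental change
        "long_term": f"milestone for '{subgoal}'",       # distant navigation milestone
    }

def act(subgoal: str, foresight: dict) -> str:
    """Choose an action conditioned on both the plan and the imagined future."""
    return f"move_toward({subgoal})"

def step(instruction: str, observation: str, state: AgentState) -> str:
    state.history.append(observation)               # perceive
    subgoal = plan(instruction, state)              # plan
    foresight = predict(state, subgoal)             # predict
    action = act(subgoal, foresight)                # act
    state.progress += 1                             # advance to the next sub-goal
    return action
```

The point of the sketch is the feedback structure: the structured plan scopes what the world model predicts, and the prediction feeds back into action selection, matching the internal loop described in the abstract.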
Contribution/Results: Evaluated on the R2R-CE and RxR-CE benchmarks, the approach achieves state-of-the-art success weighted by path length (SPL) scores of 62.3% and 48.7%, respectively, significantly improving both success rate and robustness. The framework provides a novel, interpretable, and generalizable VLM paradigm for long-horizon embodied navigation.
📝 Abstract
Embodied navigation for long-horizon tasks, guided by complex natural language instructions, remains a formidable challenge in artificial intelligence. Existing agents often struggle with robust long-term planning in unseen environments, leading to high failure rates. To address these limitations, we introduce NavForesee, a novel Vision-Language Model (VLM) that unifies high-level language planning and predictive world-model imagination within a single framework. Our approach empowers a single VLM to concurrently perform planning and predictive foresight. Conditioned on the full instruction and historical observations, the model is trained to understand the navigation instructions by decomposing the task, tracking its progress, and formulating the subsequent sub-goal. Simultaneously, it functions as a generative world model, providing crucial foresight by predicting short-term environmental dynamics and long-term navigation milestones. The VLM's structured plan guides its targeted prediction, while the imagined future provides rich context to inform the navigation actions, creating a powerful internal feedback loop of perception–planning/prediction–action. We demonstrate through extensive experiments on the R2R-CE and RxR-CE benchmarks that NavForesee achieves highly competitive performance in complex scenarios. Our work highlights the immense potential of fusing explicit language planning with implicit spatiotemporal prediction, paving the way for more intelligent and capable embodied agents.