Unified World Models: Memory-Augmented Planning and Foresight for Visual Navigation

📅 2025-10-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-based navigation methods decouple planning from world modeling, leading to state-action misalignment and poor adaptability to dynamic or unseen environments. To address this, we propose UniWM, a unified world model that integrates egocentric visual forecasting and action decision-making within a multimodal autoregressive architecture—the first to achieve such deep fusion. UniWM introduces a hierarchical memory mechanism that jointly models short-term perceptual cues and long-term trajectory context, enabling coherent, long-horizon embodied imagination and reasoning. The model is trained end-to-end in a fully self-supervised manner, requiring no explicit annotations. Evaluated on four standard benchmarks, UniWM achieves up to a 30% absolute improvement in navigation success rate and significantly reduces trajectory error. Notably, it demonstrates strong zero-shot generalization on the unseen TartanDrive dataset, validating its robust adaptability to dynamic and novel environments.

📝 Abstract
Enabling embodied agents to effectively imagine future states is critical for robust and generalizable visual navigation. Current state-of-the-art approaches, however, adopt modular architectures that separate navigation planning from visual world modeling, leading to state-action misalignment and limited adaptability in novel or dynamic scenarios. To overcome this fundamental limitation, we propose UniWM, a unified, memory-augmented world model integrating egocentric visual foresight and planning within a single multimodal autoregressive backbone. Unlike modular frameworks, UniWM explicitly grounds action decisions in visually imagined outcomes, ensuring tight alignment between prediction and control. A hierarchical memory mechanism further integrates detailed short-term perceptual cues with longer-term trajectory context, enabling stable, coherent reasoning over extended horizons. Extensive experiments across four challenging benchmarks (Go Stanford, ReCon, SCAND, HuRoN) demonstrate that UniWM substantially improves navigation success rates by up to 30%, significantly reduces trajectory errors compared to strong baselines, and exhibits impressive zero-shot generalization on the unseen TartanDrive dataset. These results highlight UniWM as a principled step toward unified, imagination-driven embodied navigation.
Problem

Research questions and friction points this paper is trying to address.

Unifying visual world modeling with navigation planning
Addressing state-action misalignment in dynamic environments
Improving long-horizon reasoning for visual navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified memory-augmented world model integrating foresight and planning
Hierarchical memory mechanism combines short and long-term context
Autoregressive backbone grounds actions in visually imagined outcomes
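The imagine-then-act loop described above can be sketched in miniature. This is a hypothetical illustration, not the authors' released code: `HierarchicalMemory`, `rollout`, and the stand-in `model` interface are assumptions made for clarity, with a bounded deque standing in for short-term perceptual cues and a running mean standing in for the long-term trajectory summary.

```python
from collections import deque

class HierarchicalMemory:
    """Toy hierarchical memory: a short window of recent perceptual
    features plus a coarse long-term trajectory summary (running mean)."""
    def __init__(self, short_len=4):
        self.short = deque(maxlen=short_len)  # short-term perceptual cues
        self.long = None                      # long-term trajectory context
        self.count = 0

    def update(self, feat):
        self.short.append(feat)
        self.count += 1
        if self.long is None:
            self.long = list(feat)
        else:
            # incremental running mean over the whole trajectory
            self.long = [l + (f - l) / self.count
                         for l, f in zip(self.long, feat)]

    def context(self):
        return list(self.short), self.long


def rollout(model, memory, obs, horizon=3):
    """Unified imagine-then-act loop: one autoregressive model predicts
    the next egocentric observation and the action at every step, so the
    action is grounded in the visually imagined outcome."""
    actions = []
    for _ in range(horizon):
        memory.update(obs)
        short, long_ctx = memory.context()
        obs, action = model(obs, short, long_ctx)  # joint foresight + decision
        actions.append(action)
    return actions
```

In the paper's setting the `model` call would be a multimodal autoregressive transformer over image and action tokens; here any callable with the same (observation, short-term context, long-term context) signature can be plugged in.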
👥 Authors
Yifei Dong (KTH Royal Institute of Technology) · Robotic manipulation
Fengyi Wu (Unknown affiliation)
Guangyu Chen (University of Washington)
Zhi-Qi Cheng (Assistant Professor @ UW | Graduate Faculty | Ex-CMU, Google, Microsoft | Intel & IBM PhD Fellowship) · multimedia processing, multimedia understanding, multimodal foundation models
Qiyu Hu (University of Washington)
Yuxuan Zhou (University of Washington)
Jingdong Sun (Carnegie Mellon University)
Jun-Yan He (Tongyi Lab, Alibaba Group) · Multimedia Computing, Computer Vision
Qi Dai (Microsoft Research)
Alexander G. Hauptmann (Carnegie Mellon University)