🤖 AI Summary
Existing vision-based navigation methods decouple planning from world modeling, leading to state-action misalignment and poor adaptability in dynamic or unseen environments. To address this, we propose UniWM, a unified world model that integrates egocentric visual forecasting and action decision-making within a single multimodal autoregressive architecture, tightly coupling prediction and control. UniWM introduces a hierarchical memory mechanism that jointly models short-term perceptual cues and long-term trajectory context, enabling coherent, long-horizon embodied imagination and reasoning. The model is trained end-to-end in a fully self-supervised manner, requiring no explicit annotations. Evaluated on four standard benchmarks, UniWM improves navigation success rate by up to 30% and substantially reduces trajectory error. Notably, it demonstrates strong zero-shot generalization on the unseen TartanDrive dataset, underscoring its adaptability to dynamic and novel environments.
📝 Abstract
Enabling embodied agents to effectively imagine future states is critical for robust and generalizable visual navigation. Current state-of-the-art approaches, however, adopt modular architectures that separate navigation planning from visual world modeling, leading to state-action misalignment and limited adaptability in novel or dynamic scenarios. To overcome this fundamental limitation, we propose UniWM, a unified, memory-augmented world model integrating egocentric visual foresight and planning within a single multimodal autoregressive backbone. Unlike modular frameworks, UniWM explicitly grounds action decisions in visually imagined outcomes, ensuring tight alignment between prediction and control. A hierarchical memory mechanism further integrates detailed short-term perceptual cues with longer-term trajectory context, enabling stable, coherent reasoning over extended horizons. Extensive experiments across four challenging benchmarks (Go Stanford, ReCon, SCAND, HuRoN) demonstrate that UniWM substantially improves navigation success rates by up to 30%, significantly reduces trajectory errors compared to strong baselines, and exhibits impressive zero-shot generalization on the unseen TartanDrive dataset. These results highlight UniWM as a principled step toward unified, imagination-driven embodied navigation.
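To make the hierarchical memory idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation): a short-term buffer keeps the most recent perceptual features verbatim, while a long-term summary compresses the full trajectory context — modeled here, purely as an assumption, by an exponential moving average. The capacity, decay rate, and feature layout are all illustrative choices.

```python
from collections import deque

class HierarchicalMemory:
    """Illustrative two-level memory: a short-term buffer of recent
    perceptual features plus a long-term running summary of the
    trajectory context (EMA here is an assumed stand-in)."""

    def __init__(self, short_capacity=4, decay=0.9):
        self.short_term = deque(maxlen=short_capacity)  # recent per-step features
        self.long_term = None                           # trajectory-level summary
        self.decay = decay                              # hypothetical EMA decay

    def update(self, feature):
        """Store the newest feature and fold it into the long-term summary."""
        self.short_term.append(feature)
        if self.long_term is None:
            self.long_term = list(feature)
        else:
            self.long_term = [self.decay * l + (1 - self.decay) * f
                              for l, f in zip(self.long_term, feature)]

    def context(self):
        """Concatenate short-term cues with the long-term summary; this
        would condition the next autoregressive prediction step."""
        flat = [x for feat in self.short_term for x in feat]
        return flat + (self.long_term or [])

# Toy usage: 2-d features over 5 steps, buffer holds the last 2 steps.
mem = HierarchicalMemory(short_capacity=2)
for t in range(5):
    mem.update([float(t), float(t) * 0.5])
ctx = mem.context()
print(len(ctx))  # 2 frames * 2 dims + 2-dim long-term summary = 6
```

The design point this sketches is that the conditioning context stays fixed-size regardless of trajectory length: recent detail is kept exactly, while older steps survive only through the compressed summary, which is what permits stable reasoning over long horizons.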