🤖 AI Summary
Traditional vision-language-action models for navigation are limited by data scarcity and static representations, struggling to capture temporal dynamics and physical constraints. This work proposes NavDreamer, a novel framework that, for the first time, leverages a generative video model as a universal interface between language instructions and navigation trajectories. By encoding spatiotemporal structure and physical dynamics into video representations, NavDreamer enables zero-shot 3D navigation. The approach integrates vision-language model (VLM)-based trajectory scoring, sampling-based optimization, and inverse dynamics decoding to produce executable waypoints. Experiments demonstrate that NavDreamer generalizes strongly to unseen environments and novel objects, validating the efficacy of video generation as a foundation for high-level navigation decision-making.
📝 Abstract
Previous Vision-Language-Action models face critical limitations in navigation: scarcity of diverse data due to labor-intensive collection, and static representations that fail to capture temporal dynamics and physical laws. We propose NavDreamer, a video-based framework for 3D navigation that leverages generative video models as a universal interface between language instructions and navigation trajectories. Our main hypothesis is that video's ability to encode spatiotemporal information and physical dynamics, combined with its internet-scale availability, enables strong zero-shot generalization in navigation. To mitigate the stochasticity of generative predictions, we introduce a sampling-based optimization method that uses a VLM for trajectory scoring and selection. An inverse dynamics model then decodes executable waypoints from the generated video plans. To systematically evaluate this paradigm across several video model backbones, we introduce a comprehensive benchmark covering object navigation, precise navigation, spatial grounding, language control, and scene reasoning. Extensive experiments demonstrate robust generalization across novel objects and unseen environments, with ablation studies revealing that navigation's high-level decision-making nature makes it particularly well suited to video-based planning.
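The sampling-based optimization described above can be sketched at a high level: draw several candidate video plans from the generative model, score each with a VLM, keep the best, and decode it into waypoints. The sketch below is a minimal illustration only; every function (`generate_video_plan`, `vlm_score`, `inverse_dynamics`) is a hypothetical placeholder, since the abstract does not specify the actual model interfaces.

```python
import random

# Hypothetical sketch of a sampling-based video-plan selection loop.
# All three components are placeholders standing in for the real
# generative video model, VLM scorer, and inverse dynamics model.

def generate_video_plan(instruction, observation, seed):
    """Placeholder: a generative video model would return a predicted
    sequence of future frames conditioned on the instruction."""
    return [f"frame_{seed}_{t}" for t in range(8)]

def vlm_score(instruction, video):
    """Placeholder: a VLM would rate how well the candidate video
    follows the instruction (higher is better)."""
    return random.random()

def inverse_dynamics(video):
    """Placeholder: an inverse dynamics model would decode executable
    waypoints from consecutive frame pairs."""
    return [(t, t + 1) for t in range(len(video) - 1)]

def plan(instruction, observation, num_samples=5):
    # Sample several candidate plans to counter the stochasticity of
    # generative prediction, then keep the highest-scoring one.
    candidates = [generate_video_plan(instruction, observation, s)
                  for s in range(num_samples)]
    best = max(candidates, key=lambda v: vlm_score(instruction, v))
    return inverse_dynamics(best)

waypoints = plan("go to the red chair", observation=None)
```

The key design point is that stochasticity is handled at the population level: no single generated video is trusted, and the VLM acts as an external critic that ranks samples rather than steering generation directly.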