NavDreamer: Video Models as Zero-Shot 3D Navigators

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional vision-language-action models for navigation are limited by data scarcity and static representations, struggling to capture temporal dynamics and physical constraints. This work proposes NavDreamer, a novel framework that, for the first time, leverages a generative video model as a universal interface between language instructions and navigation trajectories. By encoding spatiotemporal structure and physical dynamics into video representations, NavDreamer enables zero-shot 3D navigation. The approach integrates visual-language model-based trajectory scoring, sampling-based optimization, and inverse dynamics decoding to produce executable waypoints. Experiments demonstrate that NavDreamer exhibits strong generalization to unseen environments and novel objects, validating the efficacy of video generation as a foundation for high-level navigation decision-making.

📝 Abstract
Previous Vision-Language-Action models face critical limitations in navigation: training data that is scarce and lacks diversity due to labor-intensive collection, and static representations that fail to capture temporal dynamics and physical laws. We propose NavDreamer, a video-based framework for 3D navigation that leverages generative video models as a universal interface between language instructions and navigation trajectories. Our main hypothesis is that video's ability to encode spatiotemporal information and physical dynamics, combined with internet-scale availability, enables strong zero-shot generalization in navigation. To mitigate the stochasticity of generative predictions, we introduce a sampling-based optimization method that utilizes a VLM for trajectory scoring and selection. An inverse dynamics model is employed to decode executable waypoints from generated video plans for navigation. To systematically evaluate this paradigm across several video model backbones, we introduce a comprehensive benchmark covering object navigation, precise navigation, spatial grounding, language control, and scene reasoning. Extensive experiments demonstrate robust generalization across novel objects and unseen environments, with ablation studies revealing that navigation's high-level decision-making nature makes it particularly suited for video-based planning.
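The sample-score-decode loop described in the abstract can be sketched in a few lines. This is a conceptual illustration only: the function names, the toy plan representation (lists of 2D points), and the scoring heuristic are stand-ins invented here, not NavDreamer's actual API or models.

```python
import random

def generate_video_plan(instruction, observation, seed):
    """Stand-in for the generative video model: returns one candidate
    trajectory as a list of (x, y) frames. The real system would
    produce video frames conditioned on the instruction and observation."""
    rng = random.Random(seed)
    return [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(4)]

def vlm_score(instruction, plan):
    """Stand-in for VLM-based trajectory scoring: here we simply reward
    plans whose final frame is close to a fixed hypothetical goal."""
    goal = (1.0, 1.0)
    last = plan[-1]
    return -((last[0] - goal[0]) ** 2 + (last[1] - goal[1]) ** 2)

def inverse_dynamics(plan):
    """Stand-in for the inverse dynamics model: decode frame-to-frame
    changes in the plan into executable waypoints (here, displacements)."""
    return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(plan, plan[1:])]

def navdreamer_step(instruction, observation, num_samples=8):
    # 1. Sample several candidate video plans to counter generative stochasticity.
    plans = [generate_video_plan(instruction, observation, seed=s)
             for s in range(num_samples)]
    # 2. Score each candidate with the VLM and keep the best one.
    best = max(plans, key=lambda p: vlm_score(instruction, p))
    # 3. Decode the selected plan into executable waypoints.
    return inverse_dynamics(best)

waypoints = navdreamer_step("go to the chair", observation=None)
print(len(waypoints))  # one waypoint per frame-to-frame transition
```

The key design point the sketch captures is that the video model is used purely as a high-level planner: stochasticity is handled by sampling and VLM selection, and only the final chosen plan is converted into low-level waypoints.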
Problem

Research questions and friction points this paper is trying to address.

3D navigation
zero-shot generalization
vision-language-action models
temporal dynamics
physical laws
Innovation

Methods, ideas, or system contributions that make the work stand out.

video-based navigation
zero-shot generalization
generative video models
trajectory optimization
inverse dynamics model