ImagiNav: Scalable Embodied Navigation via Generative Visual Prediction and Inverse Dynamics

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of robot vision-language navigation, which typically relies on costly platform-specific demonstration data and struggles to generalize. To overcome this limitation, the authors propose a modular navigation paradigm that decouples visual planning from execution, enabling, for the first time, zero-shot transfer from unlabeled, open-world videos without any robot demonstrations. The approach uses a vision-language model to interpret instructions, a fine-tuned generative video model to predict future visual trajectories, and an inverse dynamics model to extract actions, which are then executed by a low-level controller. The method also introduces a scalable, automated data-annotation pipeline that enables generalizable navigation policies to be trained directly from real-world videos. Experiments demonstrate significant improvements in zero-shot transfer performance in unseen environments, laying the foundation for general-purpose robots that autonomously learn from open-world visual data.

📝 Abstract
Enabling robots to navigate open-world environments via natural language is critical for general-purpose autonomy. Yet Vision-Language Navigation has relied on end-to-end policies trained on expensive, embodiment-specific robot data. While recent foundation models trained on vast simulation data show promise, they remain hard to scale and generalize because simulation offers limited scene diversity and visual fidelity. To address this gap, we propose ImagiNav, a novel modular paradigm that decouples visual planning from robot actuation, enabling the direct utilization of diverse in-the-wild navigation videos. Our framework operates as a hierarchy: a Vision-Language Model first decomposes instructions into textual subgoals; a finetuned generative video model then imagines the future video trajectory towards each subgoal; finally, an inverse dynamics model extracts the trajectory from the imagined video, which can then be tracked by a low-level controller. We additionally develop a scalable data pipeline of in-the-wild navigation videos auto-labeled via inverse dynamics and a pretrained Vision-Language Model. ImagiNav demonstrates strong zero-shot transfer to robot navigation without requiring robot demonstrations, paving the way for generalist robots that learn navigation directly from unlabeled, open-world data.
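The hierarchy described in the abstract can be sketched as a simple data-flow pipeline. This is a minimal illustration, not the paper's implementation: all class and function names are hypothetical, and each stage is a stub standing in for the real VLM, generative video model, and inverse dynamics model.

```python
# Hedged sketch of an ImagiNav-style modular pipeline: instruction -> subgoals
# -> imagined video -> trajectory. Stages are stubs showing only the data flow;
# a real system would call learned models at each step.
from dataclasses import dataclass
from typing import List


@dataclass
class Pose:
    x: float
    y: float
    yaw: float


def decompose_instruction(instruction: str) -> List[str]:
    # Stage 1 (Vision-Language Model): split a free-form instruction into
    # textual subgoals. Stubbed here as a comma split.
    return [s.strip() for s in instruction.split(",") if s.strip()]


def imagine_video(subgoal: str, n_frames: int = 4) -> List[str]:
    # Stage 2 (generative video model): predict future frames toward the
    # subgoal. Frames are stand-in strings in this sketch.
    return [f"frame_{i}:{subgoal}" for i in range(n_frames)]


def inverse_dynamics(frames: List[str]) -> List[Pose]:
    # Stage 3 (inverse dynamics model): recover a pose trajectory from the
    # imagined video; here, one unit step per frame transition.
    return [Pose(x=float(i), y=0.0, yaw=0.0) for i in range(1, len(frames))]


def navigate(instruction: str) -> List[Pose]:
    # Full hierarchy: subgoals -> imagined video -> trajectory, which a
    # low-level controller would then track on the robot.
    trajectory: List[Pose] = []
    for subgoal in decompose_instruction(instruction):
        frames = imagine_video(subgoal)
        trajectory.extend(inverse_dynamics(frames))
    return trajectory


waypoints = navigate("exit the kitchen, turn left at the hallway")
print(len(waypoints))  # 2 subgoals x 3 pose deltas each -> 6
```

The key design point the sketch captures is the decoupling: only the final tracking step is embodiment-specific, so the upstream stages can in principle be trained on unlabeled, open-world video.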
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Navigation
Embodied Navigation
Open-world Generalization
Scalable Robot Learning
Zero-shot Transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Visual Prediction
Inverse Dynamics
Vision-Language Navigation
Zero-shot Transfer
Embodied Navigation
Jie Chen
Department of Mechanical Engineering, National University of Singapore, Singapore; Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore
Yuxin Cai
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore; Nanyang Technological University (NTU), Singapore
Yizhuo Wang
Ph.D. student, National University of Singapore
robot learning · path planning · reinforcement learning
Ruofei Bai
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore; Nanyang Technological University (NTU), Singapore
Yuhong Cao
National University of Singapore
Robot learning · Path planning
Jun Li
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore
Yau Wei Yun
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore
Guillaume Sartoretti
Assistant Professor, National University of Singapore (NUS), Mechanical Engineering Department
Multi-Agent Systems · Robotics · Swarm Intelligence · Distributed Control · Distributed Learning