🤖 AI Summary
This work addresses the challenge of robot vision-language navigation, which typically relies on costly, platform-specific demonstration data and struggles to generalize. To overcome this limitation, the authors propose a modular navigation paradigm that decouples visual planning from execution, enabling, for the first time, zero-shot transfer from unlabeled, open-world videos without any robot demonstrations. The approach leverages a vision-language model to interpret instructions, a fine-tuned generative video model to predict future trajectories, and an inverse dynamics model to extract actions, which are then executed by a low-level controller. The method also introduces a scalable, automated data-annotation pipeline that enables generalizable navigation policies to be trained directly from real-world videos. Experiments demonstrate significant improvements in zero-shot transfer performance in unseen environments, laying the foundation for general-purpose robots that autonomously learn from open-world visual data.
📝 Abstract
Enabling robots to navigate open-world environments via natural language is critical for general-purpose autonomy. Yet Vision-Language Navigation has relied on end-to-end policies trained on expensive, embodiment-specific robot data. While recent foundation models trained on vast simulation data show promise, the limited scene diversity and visual fidelity of simulation still hinder scaling and generalization. To address this gap, we propose ImagiNav, a novel modular paradigm that decouples visual planning from robot actuation, enabling the direct use of diverse in-the-wild navigation videos. Our framework operates as a hierarchy: a Vision-Language Model first decomposes instructions into textual subgoals; a fine-tuned generative video model then imagines the future video trajectory toward each subgoal; finally, an inverse dynamics model extracts the trajectory from the imagined video, which a low-level controller then tracks. We additionally develop a scalable data pipeline of in-the-wild navigation videos auto-labeled via inverse dynamics and a pretrained Vision-Language Model. ImagiNav demonstrates strong zero-shot transfer to robot navigation without requiring robot demonstrations, paving the way for generalist robots that learn navigation directly from unlabeled, open-world data.
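The hierarchy described in the abstract (VLM subgoals → imagined video → inverse dynamics → controller) can be sketched in pseudocode-style Python. This is a minimal illustration of the control flow only, not the authors' implementation: every class name, method signature, and the toy "models" below are hypothetical stand-ins for the real learned components.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """Hypothetical placeholder for one RGB observation."""
    step: int

class SubgoalPlanner:
    """Stand-in for the Vision-Language Model that decomposes an
    instruction into textual subgoals (assumed interface)."""
    def decompose(self, instruction: str) -> List[str]:
        # A real VLM would ground subgoals in the current observation;
        # here we just split on commas for illustration.
        return [part.strip() for part in instruction.split(",")]

class VideoImaginer:
    """Stand-in for the fine-tuned generative video model that
    'imagines' a short future video toward a subgoal."""
    def imagine(self, current: Frame, subgoal: str, horizon: int = 4) -> List[Frame]:
        return [Frame(step=current.step + i + 1) for i in range(horizon)]

class InverseDynamics:
    """Stand-in for the inverse dynamics model that recovers the
    trajectory: one action per consecutive frame pair."""
    def extract_actions(self, video: List[Frame]) -> List[str]:
        return ["move_forward" for _ in zip(video, video[1:])]

def navigate(instruction: str, start: Frame) -> List[str]:
    """Run the hierarchy once per subgoal; the resulting action list
    would be handed to a low-level controller for tracking."""
    planner, imaginer, idm = SubgoalPlanner(), VideoImaginer(), InverseDynamics()
    actions: List[str] = []
    frame = start
    for subgoal in planner.decompose(instruction):
        video = imaginer.imagine(frame, subgoal)
        actions.extend(idm.extract_actions([frame] + video))
        frame = video[-1]
    return actions
```

Under these toy stand-ins, a two-subgoal instruction yields one short imagined clip and one action segment per subgoal; the modularity means any component can be swapped (e.g. a stronger video model) without retraining the others.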