🤖 AI Summary
This work addresses the limited generalization of existing vision-and-language navigation methods, which stems from the constrained diversity and scalability of simulation-based datasets. To overcome this, we propose a large-scale video-instruction learning framework that leverages in-the-wild room tour videos from the web. The approach introduces implicit geometric representations into the task for the first time, extracting spatial cues directly from RGB frames without relying on fragile explicit 3D reconstruction. By effectively exploiting unlabeled web videos, the model acquires zero-shot spatial reasoning and navigation capabilities. Combining visual-linguistic alignment with an end-to-end navigation architecture, the method sets new state-of-the-art results on multiple benchmarks (CVDN, SOON, R2R, and REVERIE) and markedly improves zero-shot navigation performance and robustness.
📝 Abstract
Vision-and-Language Navigation (VLN) has long been constrained by the limited diversity and scalability of simulator-curated datasets, which fail to capture the complexity of real-world environments. To overcome this limitation, we introduce a large-scale video-instruction framework derived from web-based room tour videos, enabling agents to learn from natural human walking demonstrations in diverse, realistic indoor settings. Unlike existing datasets, our framework integrates both open-ended, description-enriched trajectories and action-enriched trajectories reconstructed in 3D, providing richer spatial and semantic supervision. A key extension in this work is the incorporation of implicit geometry representations, which extract spatial cues directly from RGB frames without requiring fragile explicit 3D reconstruction. This substantially improves data utilization, alleviates reconstruction failures, and unlocks large portions of previously unusable video data. Comprehensive experiments on multiple VLN benchmarks (CVDN, SOON, R2R, and REVERIE) show that our method not only sets a new state of the art but also enables robust zero-shot navigation agents. By bridging large-scale web videos with implicit spatial reasoning, this work advances embodied navigation toward more scalable, generalizable, and real-world-ready solutions.
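
To make the "implicit geometry" idea concrete, the sketch below shows one plausible way such a component could be wired up: a per-frame geometry encoder maps raw RGB frames to compact spatial tokens that are fused with ordinary visual tokens, so no explicit 3D reconstruction of the room tour video is needed. This is an illustration under assumptions, not the paper's released code; all module and function names (`GeometryEncoder`, `FrameFusion`) and the tiny CNN standing in for a pretrained depth/geometry backbone are hypothetical.

```python
# Hedged sketch of an "implicit geometry" pathway: RGB frames -> per-frame
# geometry tokens -> fusion with visual tokens. Names and shapes are assumptions.
import torch
import torch.nn as nn


class GeometryEncoder(nn.Module):
    """Hypothetical frozen encoder mapping RGB frames to implicit geometry tokens."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # A tiny CNN stands in for a pretrained monocular depth/geometry backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Conv2d(32, dim, kernel_size=3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) -> geometry tokens: (batch, time, dim)
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.reshape(b * t, c, h, w)).flatten(1)
        return feats.reshape(b, t, -1)


class FrameFusion(nn.Module):
    """Concatenate per-frame visual and geometry tokens, then project."""

    def __init__(self, visual_dim: int = 768, geom_dim: int = 256, out_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(visual_dim + geom_dim, out_dim)

    def forward(self, visual_tokens: torch.Tensor, geom_tokens: torch.Tensor) -> torch.Tensor:
        return self.proj(torch.cat([visual_tokens, geom_tokens], dim=-1))


if __name__ == "__main__":
    frames = torch.randn(2, 8, 3, 224, 224)   # 2 clips of 8 RGB frames each
    visual_tokens = torch.randn(2, 8, 768)    # e.g. from a frozen ViT (assumed)
    geom_tokens = GeometryEncoder()(frames)
    fused = FrameFusion()(visual_tokens, geom_tokens)
    print(fused.shape)                        # torch.Size([2, 8, 768])
```

In such a design, the fused per-frame tokens would feed the downstream instruction-conditioned navigation policy; because the geometry encoder consumes RGB directly, videos for which explicit 3D reconstruction fails can still contribute training signal.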