Implicit Geometry Representations for Vision-and-Language Navigation from Web Videos

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of existing vision-and-language navigation methods, which stems from the constrained diversity and scalability of simulation-based datasets. To overcome this, we propose a large-scale video-instruction learning framework that leverages in-the-wild room tour videos from the web. Our approach introduces implicit geometric representations into the task for the first time, enabling direct extraction of spatial cues from RGB frames without relying on fragile explicit 3D reconstructions. By effectively utilizing unlabeled web videos, the model achieves zero-shot spatial reasoning and navigation capabilities. Integrating visual-linguistic alignment with an end-to-end navigation architecture, our method establishes new state-of-the-art results across multiple benchmarks—including CVDN, SOON, R2R, and REVERIE—significantly improving zero-shot navigation performance and robustness.

📝 Abstract
Vision-and-Language Navigation (VLN) has long been constrained by the limited diversity and scalability of simulator-curated datasets, which fail to capture the complexity of real-world environments. To overcome this limitation, we introduce a large-scale video-instruction framework derived from web-based room tour videos, enabling agents to learn from natural human walking demonstrations in diverse, realistic indoor settings. Unlike existing datasets, our framework integrates both open-ended description-enriched trajectories and action-enriched trajectories reconstructed in 3D, providing richer spatial and semantic supervision. A key extension in this work is the incorporation of implicit geometry representations, which extract spatial cues directly from RGB frames without requiring fragile 3D reconstruction. This approach substantially improves data utilization, alleviates reconstruction failures, and unlocks large portions of previously unusable video data. Comprehensive experiments across multiple VLN benchmarks (CVDN, SOON, R2R, and REVERIE) demonstrate that our method not only sets new state-of-the-art performance but also enables the development of robust zero-shot navigation agents. By bridging large-scale web videos with implicit spatial reasoning, this work advances embodied navigation towards more scalable, generalizable, and real-world applicable solutions.
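To make the idea of "spatial cues directly from RGB frames without 3D reconstruction" concrete, here is a minimal toy sketch. It is not the paper's method: the function name, the band-statistics features, and the choice of a brightness/gradient proxy are all illustrative assumptions, standing in for whatever learned implicit representation the authors actually use.

```python
import numpy as np

def implicit_spatial_cues(rgb: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Toy illustration (NOT the paper's method): summarize an (H, W, 3)
    uint8 RGB frame into a fixed-size feature vector of coarse spatial
    statistics, with no explicit 3D reconstruction step."""
    gray = rgb.astype(np.float32).mean(axis=2)  # (H, W) intensity map
    # Split the frame into horizontal bands; per-band mean brightness and
    # vertical gradient serve as crude proxies for depth/layout cues.
    bands = np.array_split(gray, n_bins, axis=0)
    means = np.array([b.mean() for b in bands])
    grads = np.array(
        [np.abs(np.diff(b, axis=0)).mean() if b.shape[0] > 1 else 0.0
         for b in bands]
    )
    feat = np.concatenate([means, grads])  # length 2 * n_bins
    # Normalize so the vector is comparable across frames.
    return (feat - feat.mean()) / (feat.std() + 1e-6)

# Usage on a synthetic frame:
frame = (np.random.default_rng(0).random((64, 96, 3)) * 255).astype(np.uint8)
vec = implicit_spatial_cues(frame)
print(vec.shape)  # (16,)
```

The point of the sketch is the interface: every RGB frame yields a usable spatial feature, so no video is discarded due to a failed SfM/3D reconstruction pipeline, which is the data-utilization argument the abstract makes.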
Problem

Research questions and friction points this paper is trying to address.

Vision-and-Language Navigation
web videos
real-world environments
data diversity
scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

implicit geometry representation
vision-and-language navigation
web video learning
3D-free spatial reasoning
zero-shot navigation
Mingfei Han
MBZUAI; University of Technology Sydney; Bytedance Seed; MMLab, SIAT
Object Recognition, Video Understanding, Vision Language Models, Robotics
Haihong Hao
School of Information Science and Technology, University of Science and Technology of China
Liang Ma
Department of Computer Vision, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
Kamila Zhumakhanova
Department of Computer Vision, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
Ekaterina Radionova
Research Scientist, MBZUAI
Jingyi Zhang
Shenzhen Campus of Sun Yat-Sen University
Xiaojun Chang
Director of The ReLER Lab and Professor in Artificial Intelligence, University of Technology Sydney
Computer Vision, Deep Learning, Multimedia Computing, Video Analysis
Xiaodan Liang
Professor of Computer Science, Sun Yat-sen University, MBZUAI, CMU, NUS
Computer Vision, Embodied AI, Machine Learning
Ivan Laptev
Professor at MBZUAI, on leave from INRIA
Computer Vision, Robotics, Action Recognition, Object Recognition