Spatial-VLN: Zero-Shot Vision-and-Language Navigation With Explicit Spatial Perception and Exploration

📅 2026-01-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenges of zero-shot vision-and-language navigation (VLN) in complex continuous environments—inadequate spatial awareness that leads to failures in door interaction, multi-room navigation, and ambiguous-instruction execution—by proposing the Spatial-VLN framework. Spatial-VLN introduces an explicit spatial perception module and a query-based active exploration mechanism, integrating panoramic filtering, specialized door and region experts, parallel large language model (LLM) reasoning, and value-driven waypoint sampling into a closed-loop perception–reasoning–exploration pipeline. Notably, it achieves state-of-the-art performance on the VLN-CE benchmark using only low-cost LLMs, while demonstrating strong generalization and robustness in real-world environments.
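The query-and-explore loop summarized above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `waypoint_expert`, `region_expert`, and `explore_fn` are hypothetical callables standing in for the parallel LLM experts and the active-probing step.

```python
def navigate_step(observation, waypoint_expert, region_expert, explore_fn):
    """One decision step: consult two experts; explore on disagreement.

    Each expert maps an observation to a proposed waypoint id;
    `explore_fn` returns an enriched observation of the ambiguous area.
    """
    w = waypoint_expert(observation)
    r = region_expert(observation)
    if w == r:
        return w  # experts agree: commit to the waypoint
    # Disagreement signals perceptual ambiguity: actively probe the
    # contested candidates, then re-query both experts.
    enriched = explore_fn(observation, {w, r})
    w2 = waypoint_expert(enriched)
    region_expert(enriched)  # second opinion; logged/ignored in this sketch
    # Fall back to the waypoint expert's post-exploration choice.
    return w2
```

The point of the sketch is the trigger condition: exploration is invoked only when the two experts' predictions diverge, rather than at every step.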

📝 Abstract
Zero-shot Vision-and-Language Navigation (VLN) agents leveraging Large Language Models (LLMs) excel in generalization but suffer from insufficient spatial perception. Focusing on complex continuous environments, we categorize key perceptual bottlenecks into three spatial challenges: door interaction, multi-room navigation, and ambiguous instruction execution, where existing methods consistently suffer high failure rates. We present Spatial-VLN, a perception-guided exploration framework designed to overcome these challenges. The framework consists of two main modules. The Spatial Perception Enhancement (SPE) module integrates panoramic filtering with specialized door and region experts to produce spatially coherent, cross-view consistent perceptual representations. Building on this foundation, our Explored Multi-expert Reasoning (EMR) module uses parallel LLM experts to address waypoint-level semantics and region-level spatial transitions. When discrepancies arise between expert predictions, a query-and-explore mechanism is activated, prompting the agent to actively probe critical areas and resolve perceptual ambiguities. Experiments on VLN-CE demonstrate that Spatial-VLN achieves state-of-the-art performance using only low-cost LLMs. Furthermore, to validate real-world applicability, we introduce a value-based waypoint sampling strategy that effectively bridges the Sim2Real gap. Extensive real-world evaluations confirm that our framework delivers superior generalization and robustness in complex environments. Our code and videos are available at https://yueluhhxx.github.io/Spatial-VLN-web/.
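The value-based waypoint sampling mentioned for Sim2Real transfer can be illustrated with a generic softmax sampler over scalar waypoint values. This is a hedged sketch under assumed interfaces: `value_fn`, the candidate format, and the temperature parameter are illustrative assumptions, not the paper's actual strategy.

```python
import math
import random

def sample_waypoint(candidates, value_fn, temperature=1.0, seed=None):
    """Pick a navigation waypoint by softmax-sampling over scalar values.

    `candidates` is any non-empty list of waypoints (e.g. (x, y) tuples)
    and `value_fn` maps a waypoint to a scalar "navigation value".
    Lower temperature concentrates sampling on high-value waypoints.
    """
    rng = random.Random(seed)
    values = [value_fn(w) / temperature for w in candidates]
    m = max(values)  # subtract the max for numerical stability
    weights = [math.exp(v - m) for v in values]
    # random.choices draws one candidate proportionally to its weight
    return rng.choices(candidates, weights=weights, k=1)[0]
```

Sampling (rather than greedily taking the arg-max) keeps some stochasticity in waypoint selection, which is one plausible way to stay robust to noisy real-world value estimates.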
Problem

Research questions and friction points this paper is trying to address.

Vision-and-Language Navigation
Zero-shot
Spatial Perception
Continuous Environments
Navigation Failure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatial Perception
Zero-Shot VLN
Multi-expert Reasoning
Query-and-Explore
Sim2Real Transfer
Lu Yue
Robotics and Control Laboratory, School of Advanced Manufacturing and Robotics, and the State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing, 100871, China; also with the Defense Innovation Institute, Academy of Military Sciences, Beijing 100071, China, and Tianjin Artificial Intelligence Innovation Center, Tianjin 300450, China
Yue Fan
Robotics and Control Laboratory, School of Advanced Manufacturing and Robotics, and the State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing, 100871, China
Shiwei Lian
Robotics and Control Laboratory, School of Advanced Manufacturing and Robotics, and the State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing, 100871, China
Yu Zhao
Harbin Institute of Technology (Shenzhen)
Natural Language Processing, Multimedia
Jiaxin Yu
Defense Innovation Institute, Academy of Military Sciences, Beijing 100071, China, and Tianjin Artificial Intelligence Innovation Center, Tianjin 300450, China
Liang Xie
Wuhan University of Technology
Time Series Forecasting, Cross-modal Learning
Feitian Zhang
Associate Professor, Peking University
Underwater Vehicles, Aerial Vehicles, Bioinspired Robotics, Control Systems, Artificial Intelligence