🤖 AI Summary
Existing vision-and-language navigation (VLN) methods for continuous 3D environments suffer from two key limitations: weak spatial awareness in waypoint prediction, and navigators that lack historical reasoning and adaptive backtracking. To address these, the paper proposes VL-NCE, a zero-shot VLN framework for continuous environments. Its waypoint predictor pairs a stronger vision encoder with masked cross-attention fusion and an occupancy-aware loss to improve geometric perception, while a multimodal large language model (MLLM)-based navigator adds explicit history state modeling and adaptive path re-planning with backtracking. Evaluated on the R2R-CE and MP3D benchmarks, VL-NCE achieves state-of-the-art zero-shot performance, competitive with fully supervised approaches. Real-world validation on a TurtleBot 4 platform further demonstrates strong generalization and environmental adaptability, establishing a geometry-aware, history-conditioned paradigm for zero-shot VLN in continuous 3D spaces.
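The occupancy-aware loss is only described at a high level here. A minimal sketch of one plausible formulation, assuming the predictor outputs a waypoint heatmap over a top-down grid and a local occupancy map is available (the function name, tensor shapes, and the penalty form below are illustrative assumptions, not the paper's actual definition):

```python
import torch
import torch.nn.functional as F

def occupancy_aware_waypoint_loss(heatmap_logits, target_heatmap,
                                  occupancy_map, occ_weight=2.0):
    """Hypothetical occupancy-aware loss for a waypoint predictor.

    heatmap_logits:  (B, H, W) predicted waypoint logits over a top-down grid
    target_heatmap:  (B, H, W) ground-truth waypoint distribution (sums to 1)
    occupancy_map:   (B, H, W) values in [0, 1], 1 = occupied / non-navigable
    occ_weight:      extra penalty on probability mass in occupied cells
    """
    B, H, W = heatmap_logits.shape
    log_probs = F.log_softmax(heatmap_logits.view(B, -1), dim=-1).view(B, H, W)
    probs = log_probs.exp()

    # Standard cross-entropy between predicted and target waypoint heatmaps.
    ce = -(target_heatmap * log_probs).sum(dim=(1, 2)).mean()

    # Penalize probability mass assigned to occupied cells, softly steering
    # predicted waypoints toward navigable free space.
    occ_penalty = (probs * occupancy_map).sum(dim=(1, 2)).mean()

    return ce + occ_weight * occ_penalty
```

Shaping the heatmap this way keeps the standard supervised term intact while adding a differentiable geometric prior, which is one natural reading of how an occupancy-aware loss could sharpen geometric perception.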
📝 Abstract
Vision-and-Language Navigation (VLN) in continuous environments requires agents to interpret natural language instructions while navigating unconstrained 3D spaces. Existing VLN-CE frameworks rely on a two-stage design: a waypoint predictor that generates candidate waypoints and a navigator that executes movements. However, current waypoint predictors struggle with spatial awareness, while navigators lack historical reasoning and backtracking capabilities, limiting adaptability. We propose a zero-shot VLN-CE framework that integrates an enhanced waypoint predictor with a Multi-modal Large Language Model (MLLM)-based navigator. Our predictor employs a stronger vision encoder, masked cross-attention fusion, and an occupancy-aware loss for better waypoint quality. The navigator incorporates history-aware reasoning and adaptive path planning with backtracking, improving robustness. Experiments on the R2R-CE and MP3D benchmarks show that our method achieves state-of-the-art (SOTA) performance in the zero-shot setting, with results competitive with fully supervised methods. Real-world validation on TurtleBot 4 further highlights its adaptability.
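The abstract does not spell out how the masked cross-attention fusion is wired. A minimal sketch, assuming the predictor fuses panoramic RGB tokens with depth tokens and masks out invalid depth positions (the module name, shapes, and the RGB-depth pairing are all assumptions on our part):

```python
import torch
import torch.nn as nn

class MaskedCrossAttentionFusion(nn.Module):
    """Hypothetical masked cross-modal attention block: RGB tokens attend to
    depth tokens, with a padding mask suppressing invalid depth positions
    (e.g. out-of-range or missing depth readings)."""

    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, depth_tokens, depth_valid_mask):
        # rgb_tokens:       (B, N_rgb, dim)
        # depth_tokens:     (B, N_depth, dim)
        # depth_valid_mask: (B, N_depth) bool, True = valid depth token
        fused, _ = self.attn(
            query=rgb_tokens,
            key=depth_tokens,
            value=depth_tokens,
            key_padding_mask=~depth_valid_mask,  # True entries are ignored
        )
        return self.norm(rgb_tokens + fused)  # residual keeps RGB features

# Shape check: fusing 196 RGB tokens with 196 depth tokens.
block = MaskedCrossAttentionFusion()
rgb = torch.randn(2, 196, 768)
depth = torch.randn(2, 196, 768)
valid = torch.ones(2, 196, dtype=torch.bool)
out = block(rgb, depth, valid)  # -> (2, 196, 768)
```

The masking keeps unreliable depth measurements from contaminating the fused representation, which is consistent with the paper's stated goal of better waypoint quality, though the actual fusion design may differ.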