🤖 AI Summary
Existing zero-shot vision-language navigation (VLN) approaches rely on panoramic observations and two-stage waypoint prediction, resulting in high latency and poor deployability. This paper proposes Fast-SmartWay, the first end-to-end framework to eliminate panoramic input, predicting actions directly from only three forward-facing RGB-D frames and natural-language instructions. The method introduces: (1) an uncertainty-aware reasoning mechanism that combines a disambiguation module with future–past bidirectional temporal modeling to strengthen decision robustness and long-horizon planning; and (2) cross-modal alignment based on a multimodal large language model (MLLM). Evaluated both in simulation and on real robotic platforms, the approach achieves significantly lower per-step latency while matching or surpassing panoramic baselines—marking the first demonstration of low-latency, deployable zero-shot VLN.
📝 Abstract
Recent advances in Vision-and-Language Navigation in Continuous Environments (VLN-CE) have leveraged multimodal large language models (MLLMs) to achieve zero-shot navigation. However, existing methods often rely on panoramic observations and two-stage pipelines involving waypoint predictors, which introduce significant latency and limit real-world applicability. In this work, we propose Fast-SmartWay, an end-to-end zero-shot VLN-CE framework that eliminates the need for panoramic views and waypoint predictors. Our approach uses only three frontal RGB-D images combined with natural language instructions, enabling MLLMs to directly predict actions. To enhance decision robustness, we introduce an Uncertainty-Aware Reasoning module that integrates (i) a Disambiguation Module for avoiding local optima, and (ii) a Future-Past Bidirectional Reasoning mechanism for globally coherent planning. Experiments on both simulated and real-robot environments demonstrate that our method significantly reduces per-step latency while achieving competitive or superior performance compared to panoramic-view baselines. These results underscore the practicality and effectiveness of Fast-SmartWay for real-world zero-shot embodied navigation.
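To make the end-to-end pipeline concrete, here is a minimal sketch of a single decision step in the style the abstract describes: three frontal RGB-D frames and the instruction are packed into one MLLM prompt, and the model's free-text reply is parsed into a discrete action, with no panoramic view or separate waypoint predictor. All names (`RGBDFrame`, `navigation_step`, the action vocabulary, the prompt format) are illustrative assumptions, not the paper's actual API.

```python
# Hedged sketch of one Fast-SmartWay-style decision step.
# All identifiers below are hypothetical; the paper's real interfaces may differ.
from dataclasses import dataclass
from typing import Callable, List

# Assumed discrete low-level action space, typical of VLN-CE agents.
ACTIONS = ("MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP")

@dataclass
class RGBDFrame:
    rgb: bytes          # encoded color image (placeholder)
    depth: bytes        # encoded depth map (placeholder)
    heading_deg: float  # camera heading relative to the agent, e.g. -30 / 0 / +30

def build_prompt(instruction: str, frames: List[RGBDFrame]) -> str:
    """Assemble an action-prediction prompt from the instruction + 3 frontal views."""
    views = ", ".join(f"<image @ {f.heading_deg:+.0f} deg>" for f in frames)
    return (
        f"Instruction: {instruction}\n"
        f"Frontal RGB-D observations: {views}\n"
        f"Reply with exactly one of: {', '.join(ACTIONS)}."
    )

def parse_action(reply: str) -> str:
    """Map the MLLM's free-text reply onto the discrete action space."""
    for action in ACTIONS:
        if action in reply.upper():
            return action
    return "STOP"  # conservative fallback when the reply is unparseable

def navigation_step(mllm: Callable[[str], str], instruction: str,
                    frames: List[RGBDFrame]) -> str:
    """One end-to-end step: prompt the MLLM, parse its reply into an action."""
    return parse_action(mllm(build_prompt(instruction, frames)))

# Usage with a stub MLLM; a real system would call an actual multimodal model.
frames = [RGBDFrame(b"", b"", h) for h in (-30.0, 0.0, 30.0)]
stub = lambda prompt: "The hallway ahead is clear, so I will MOVE_FORWARD."
print(navigation_step(stub, "Walk down the hallway to the kitchen.", frames))
```

The single prompt-to-action call is what removes the two-stage latency: there is no intermediate waypoint prediction, only one model invocation per step.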