🤖 AI Summary
This work addresses the inefficiency of large language models (LLMs) in vision-and-language navigation, which stems from repetitive instruction parsing and redundant action candidates. The authors propose a retrieval-augmented framework that improves navigation performance without fine-tuning the LLM, using a lightweight two-tier retrieval mechanism. At the instruction level, successful trajectories are retrieved as in-context examples to provide global guidance; at the step level, an imitation-learned module prunes irrelevant action candidates, refining local decision-making. The approach combines embedding-based retrieval, in-context learning, and a modular design whose training is fully decoupled from the LLM. Evaluated on the R2R benchmark, the method significantly improves Success Rate, Oracle Success Rate, and SPL (Success weighted by Path Length), demonstrating strong effectiveness and generalization across both seen and unseen environments.
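The instruction-level retrieval described above can be sketched as nearest-neighbor search over instruction embeddings. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: the hashed bag-of-words `embed` function stands in for whatever sentence encoder the paper actually uses, and the memory of (instruction, trajectory) pairs is toy data.

```python
import math

def embed(text, dim=64):
    # Stand-in embedding: hashed bag-of-words vector. A real system
    # would use a trained sentence encoder; this is illustration only.
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_exemplars(query_instruction, memory, k=2):
    """Return the k stored (instruction, trajectory) records most
    similar to the query, for use as in-context exemplars."""
    q = embed(query_instruction)
    ranked = sorted(
        memory,
        key=lambda m: cosine(q, embed(m["instruction"])),
        reverse=True,
    )
    return ranked[:k]

# Toy memory of previously successful episodes.
memory = [
    {"instruction": "walk past the kitchen and stop at the sofa",
     "trajectory": ["forward", "left", "stop"]},
    {"instruction": "go upstairs and enter the bathroom",
     "trajectory": ["forward", "up", "right", "stop"]},
    {"instruction": "stop next to the sofa in the living room",
     "trajectory": ["left", "forward", "stop"]},
]

exemplars = retrieve_exemplars("walk to the sofa and stop", memory, k=2)
```

The retrieved pairs would then be formatted into the LLM prompt as worked examples, giving the model a task-specific prior before it sees the current instruction.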
📝 Abstract
Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions and navigate through previously unseen environments. Recent approaches increasingly employ large language models (LLMs) as high-level navigators due to their flexibility and reasoning capability. However, prompt-based LLM navigation often suffers from inefficient decision-making, as the model must repeatedly interpret instructions from scratch and reason over noisy and verbose navigable candidates at each step. In this paper, we propose a retrieval-augmented framework to improve the efficiency and stability of LLM-based VLN without modifying or fine-tuning the underlying language model. Our approach introduces retrieval at two complementary levels. At the episode level, an instruction-level embedding retriever selects semantically similar successful navigation trajectories as in-context exemplars, providing task-specific priors for instruction grounding. At the step level, an imitation-learned candidate retriever prunes irrelevant navigable directions before LLM inference, reducing action ambiguity and prompt complexity. Both retrieval modules are lightweight, modular, and trained independently of the LLM. We evaluate our method on the Room-to-Room (R2R) benchmark. Experimental results demonstrate consistent improvements in Success Rate, Oracle Success Rate, and SPL on both seen and unseen environments. Ablation studies further show that instruction-level exemplar retrieval and candidate pruning contribute complementary benefits to global guidance and step-wise decision efficiency. These results indicate that retrieval-augmented decision support is an effective and scalable strategy for enhancing LLM-based vision-and-language navigation.
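The step-level candidate retriever can be pictured as a small scoring model that ranks navigable directions and forwards only the top few to the LLM. The sketch below is a hypothetical stand-in: the fixed linear weights play the role of parameters the paper learns by imitation, and the feature names are invented for illustration.

```python
def score(weights, features):
    # Dot product between (stand-in) learned weights and candidate features.
    return sum(w * f for w, f in zip(weights, features))

def prune_candidates(candidates, weights, k=3):
    """Keep only the k highest-scoring navigable candidates, so the
    LLM prompt lists fewer, more relevant actions."""
    ranked = sorted(
        candidates,
        key=lambda c: score(weights, c["features"]),
        reverse=True,
    )
    return ranked[:k]

# Toy feature layout: [instruction relevance, heading alignment, visibility].
# These weights stand in for imitation-learned parameters.
weights = [0.6, 0.3, 0.1]
candidates = [
    {"id": "door_left", "features": [0.9, 0.8, 1.0]},
    {"id": "hallway",   "features": [0.2, 0.1, 0.5]},
    {"id": "stairs",    "features": [0.7, 0.4, 0.9]},
    {"id": "window",    "features": [0.1, 0.9, 0.2]},
]

kept = prune_candidates(candidates, weights, k=2)
# Scores: door_left 0.88, stairs 0.63, window 0.35, hallway 0.20,
# so only door_left and stairs reach the LLM prompt.
```

Because the scorer runs before LLM inference and is trained separately, swapping in a stronger pruning model requires no change to the prompt-based navigator itself.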