NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning

πŸ“… 2024-03-12
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 14
✨ Influential: 2
πŸ€– AI Summary
To address the limited reasoning accuracy and weak generalization of large language models (LLMs) in vision-and-language navigation (VLN), which stem from the substantial domain gap between LLM pretraining corpora and embodied navigation tasks, this paper proposes Navigational Chain-of-Thought (NavCoT), a novel domain-adaptation strategy. NavCoT applies parameter-efficient finetuning (e.g., LoRA) to teach the LLM to generate a disentangled, stepwise reasoning chain: imagined next observation β†’ candidate-observation alignment β†’ action decision. Technically, it combines world-model-inspired imagination prompting, selection of the candidate observation that best matches the imagination, and supervision with formalized chain-of-thought labels. Evaluated on the R2R, RxR, and R4R benchmarks, NavCoT substantially outperforms direct action-prediction variants; with only lightweight finetuning, it achieves a ~7% relative improvement over a recent GPT-4-based approach on R2R. The method is self-guided, interpretable, and computationally efficient, and the code is publicly available.

πŸ“ Abstract
Vision-and-Language Navigation (VLN), as a crucial research problem of Embodied AI, requires an embodied agent to navigate through complex 3D environments following natural language instructions. Recent research has highlighted the promising capacity of large language models (LLMs) in VLN by improving navigational reasoning accuracy and interpretability. However, their predominant use in an offline manner usually suffers from substantial domain gap between the VLN task and the LLM training corpus. This paper introduces a novel strategy called Navigational Chain-of-Thought (NavCoT), where we fulfill parameter-efficient in-domain training to enable self-guided navigational decision, leading to a significant mitigation of the domain gap in a cost-effective manner. Specifically, at each timestep, the LLM is prompted to forecast the navigational chain-of-thought by: 1) acting as a world model to imagine the next observation according to the instruction, 2) selecting the candidate observation that best aligns with the imagination, and 3) determining the action based on the reasoning from the prior steps. Through constructing formalized labels for training, the LLM can learn to generate desired and reasonable chain-of-thought outputs for improving the action decision. Experimental results across various training settings and popular VLN benchmarks (e.g., Room-to-Room (R2R), Room-across-Room (RxR), Room-for-Room (R4R)) show the significant superiority of NavCoT over the direct action prediction variants. Through simple parameter-efficient finetuning, our NavCoT outperforms a recent GPT4-based approach with ~7% relative improvement on the R2R dataset. We believe that NavCoT will help unlock more task-adaptive and scalable LLM-based embodied agents, which are helpful for developing real-world robotics applications. Code is available at https://github.com/expectorlin/NavCoT.
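The three-step navigational chain-of-thought described in the abstract (imagine β†’ align β†’ decide) can be sketched as a simple prompting loop. The prompt wording, the `llm` callable, and the `toy_llm` stand-in below are illustrative assumptions, not the authors' actual prompts or model:

```python
def navcot_step(llm, instruction, candidates):
    """One NavCoT-style decision step (illustrative sketch only).

    llm: a callable mapping a prompt string to a response string.
    candidates: textual descriptions of the navigable candidate views.
    Returns the chosen candidate index and the imagined observation.
    """
    # Step 1: act as a world model and imagine the next observation.
    imagination = llm(
        f"Instruction: {instruction}\n"
        "Imagine the next observation you expect to see:"
    )
    # Step 2: select the candidate that best aligns with the imagination.
    listing = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates))
    reply = llm(
        f"Imagination: {imagination}\nCandidates:\n{listing}\n"
        "Reply with the index of the best-matching candidate:"
    )
    chosen = int(reply.strip())
    # Step 3: the action decision follows from the aligned candidate.
    return chosen, imagination


def toy_llm(prompt):
    """Deterministic stand-in for a finetuned LLM, for demonstration only."""
    if "Reply with the index" in prompt:
        return "1"  # pretend the model aligned candidate 1 with its imagination
    return "a staircase leading up"


idx, imagined = navcot_step(toy_llm, "Walk up the stairs.",
                            ["a kitchen counter", "a staircase leading up"])
# idx == 1: the staircase candidate matches the imagined observation
```

In training, the paper supervises all three steps with formalized labels, so the finetuned LLM learns to emit this chain itself rather than directly predicting an action.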
Problem

Research questions and friction points this paper is trying to address.

Reducing domain gap in Vision-and-Language Navigation (VLN) tasks
Enhancing LLM-based navigational reasoning accuracy and interpretability
Enabling self-guided decision-making via parameter-efficient in-domain training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient in-domain training for VLN
Self-guided navigational decision via NavCoT
World-model-style imagination matched against candidate observations
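The parameter-efficient in-domain training above refers to LoRA-style adapters: the pretrained weights stay frozen while a small low-rank update is learned. Below is a minimal sketch of the LoRA forward pass with toy matrices chosen purely for illustration (not the model's actual weights or dimensions):

```python
def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]


def lora_forward(W, A, B, x, alpha=16.0, r=2):
    """LoRA forward pass: h = W x + (alpha / r) * B (A x).

    W (d_out x d_in) is the frozen pretrained weight; only the
    low-rank factors A (r x d_in) and B (d_out x r) are trained.
    """
    base = matvec(W, x)               # frozen pretrained path
    update = matvec(B, matvec(A, x))  # low-rank adapter path
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, update)]


# Toy 2x2 example: with B initialized to zeros (as LoRA prescribes),
# the adapted forward pass matches the frozen model at training start.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5], [0.1, -0.1]]
B = [[0.0, 0.0], [0.0, 0.0]]
x = [2.0, 3.0]
assert lora_forward(W, A, B, x) == matvec(W, x)
```

Because only A and B are updated, the number of trainable parameters scales with the rank r rather than the full weight dimensions, which is what makes the in-domain finetuning cost-effective.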