🤖 AI Summary
This work addresses zero-shot vision-language navigation (VLN) in continuous 3D environments, circumventing the high token cost and data privacy risks associated with proprietary large language models (LLMs) such as GPT-4.
Method: We propose the first end-to-end framework for VLN built exclusively on open-source LLMs. Our approach introduces a spatial-temporal chain-of-thought (CoT) reasoning mechanism that jointly integrates fine-grained object recognition, dynamic scene understanding, natural language instruction parsing, and navigation progress estimation—requiring no domain-specific annotated data. Crucially, explicit modeling of spatial relations and temporal action planning is embedded within the CoT reasoning chain.
Contribution/Results: The method achieves GPT-4–level navigation performance in both simulation and real-world robotic deployment, while reducing token consumption by 87%. It ensures full data locality and end-to-end privacy preservation, significantly enhancing cross-scene generalization without compromising safety or efficiency.
📝 Abstract
Vision-and-Language Navigation (VLN) tasks require an agent to follow textual instructions to navigate through 3D environments. Traditional approaches use supervised learning methods, relying heavily on domain-specific datasets to train VLN models. Recent methods try to utilize closed-source large language models (LLMs) like GPT-4 to solve VLN tasks in a zero-shot manner, but face high token costs and potential data-breach risks in real-world applications. In this work, we introduce Open-Nav, a novel study that explores open-source LLMs for zero-shot VLN in continuous environments. Open-Nav employs a spatial-temporal chain-of-thought (CoT) reasoning approach to break down tasks into instruction comprehension, progress estimation, and decision-making. It enhances scene perception with fine-grained object and spatial knowledge to improve the LLM's reasoning in navigation. Our extensive experiments in both simulated and real-world environments demonstrate that Open-Nav achieves competitive performance compared to using closed-source LLMs.
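The three-stage decomposition described above can be sketched as a minimal prompting pipeline. This is an illustrative outline only, not the paper's implementation: the function name, prompt wording, and the pluggable `llm` callable (which would wrap an open-source LLM in practice) are all assumptions for the sake of the example.

```python
def spatial_temporal_cot(llm, instruction, scene_objects, history):
    """Illustrative spatial-temporal CoT step for zero-shot VLN.

    llm: any callable mapping a prompt string to a response string
         (e.g. a wrapper around an open-source LLM; hypothetical here).
    instruction: the natural-language navigation instruction.
    scene_objects: textual descriptions of detected objects and their
                   spatial relations (fine-grained scene perception).
    history: textual descriptions of actions taken so far.
    """
    # Stage 1: instruction comprehension — parse landmarks and sub-goals.
    comprehension = llm(
        "Parse the instruction into landmarks and actions:\n"
        f"{instruction}"
    )
    # Stage 2: progress estimation — compare action history to sub-goals.
    progress = llm(
        "Estimate navigation progress from the parsed sub-goals and history.\n"
        f"Parsed: {comprehension}\nHistory: {history}"
    )
    # Stage 3: decision-making — choose the next action using spatial facts.
    decision = llm(
        "Choose the next action given the scene and progress so far.\n"
        f"Scene objects with spatial relations: {scene_objects}\n"
        f"Progress: {progress}"
    )
    return {
        "comprehension": comprehension,
        "progress": progress,
        "decision": decision,
    }
```

Chaining the three prompts this way keeps each reasoning step small and auditable, and lets the spatial scene description and temporal action history enter the chain at the stages where they are needed.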