Open-Nav: Exploring Zero-Shot Vision-and-Language Navigation in Continuous Environment with Open-Source LLMs

📅 2024-09-27
🏛️ arXiv.org
📈 Citations: 0
Influential Citations: 0
🤖 AI Summary
This work tackles zero-shot vision-and-language navigation (VLN) in continuous 3D environments while avoiding the high token costs and data privacy risks associated with closed-source large language models (LLMs) such as GPT-4. Method: The authors introduce Open-Nav, a framework built on open-source LLMs that requires no domain-specific training data. It applies spatial-temporal chain-of-thought (CoT) reasoning to decompose each navigation decision into instruction comprehension, progress estimation, and decision-making, and it enriches scene perception with fine-grained object and spatial knowledge to strengthen the LLM's reasoning. Contribution/Results: Extensive experiments in both simulated and real-world environments show that Open-Nav achieves navigation performance competitive with closed-source LLMs, while keeping data local and reducing inference cost.

📝 Abstract
Vision-and-Language Navigation (VLN) tasks require an agent to follow textual instructions to navigate through 3D environments. Traditional approaches use supervised learning methods, relying heavily on domain-specific datasets to train VLN models. Recent methods try to utilize closed-source large language models (LLMs) like GPT-4 to solve VLN tasks in a zero-shot manner, but face challenges related to expensive token costs and potential data breaches in real-world applications. In this work, we introduce Open-Nav, a novel study that explores open-source LLMs for zero-shot VLN in the continuous environment. Open-Nav employs a spatial-temporal chain-of-thought (CoT) reasoning approach to break down tasks into instruction comprehension, progress estimation, and decision-making. It enhances scene perceptions with fine-grained object and spatial knowledge to improve the LLM's reasoning in navigation. Our extensive experiments in both simulated and real-world environments demonstrate that Open-Nav achieves competitive performance compared to using closed-source LLMs.
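The abstract describes how Open-Nav breaks each navigation decision into instruction comprehension, progress estimation, and decision-making via spatial-temporal CoT prompting of an open-source LLM. Below is a minimal sketch of what one such decision step could look like; the prompt wording, the generic `llm` callable, and the waypoint format are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, List

def cot_navigation_step(
    llm: Callable[[str], str],        # any open-source chat LLM, wrapped as prompt -> completion
    instruction: str,                 # natural-language navigation instruction
    history: List[str],               # temporal context: actions / observations so far
    scene_description: str,           # spatial context: objects, directions, distances
    candidate_waypoints: List[str],   # e.g. "0: doorway ahead (2.1m)"
) -> str:
    """One spatial-temporal CoT decision step (sketch, not the paper's code)."""
    prompt_lines = [
        "You are a navigation agent in a continuous indoor environment.",
        f"Instruction: {instruction}",
        "History of steps taken so far: " + ("; ".join(history) if history else "(none)"),
        f"Current scene (objects and spatial relations): {scene_description}",
        "Candidate waypoints:",
        *candidate_waypoints,
        "",
        "Reason step by step:",
        "1. Instruction comprehension: which part of the instruction applies now?",
        "2. Progress estimation: how much of the instruction is already completed?",
        "3. Decision: which candidate waypoint best continues the route?",
        "Answer with the chosen waypoint index on the last line.",
    ]
    reasoning = llm("\n".join(prompt_lines))
    # The last line carries the decision; everything above it is the CoT trace.
    return reasoning.strip().splitlines()[-1]

if __name__ == "__main__":
    # Stub LLM so the sketch runs without downloading a model;
    # in practice this would call a locally hosted open-source LLM.
    fake_llm = lambda p: "1. Reach the kitchen doorway.\n2. Roughly halfway done.\n3. Waypoint 0 leads there.\n0"
    choice = cot_navigation_step(
        fake_llm,
        instruction="Walk past the sofa and stop at the kitchen doorway.",
        history=["moved forward past the sofa"],
        scene_description="sofa behind (1.5m), doorway ahead (2.1m), hallway to the left (3.4m)",
        candidate_waypoints=["0: doorway ahead (2.1m)", "1: hallway to the left (3.4m)"],
    )
    print("chosen waypoint:", choice)
```

Keeping the model behind a plain prompt-to-text callable is what would let the same loop run against any locally hosted open-source LLM, which is the cost and privacy property the paper emphasizes.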
Problem

Research questions and friction points this paper is trying to address.

Zero-shot vision-language navigation in continuous environments.
Utilizing open-source LLMs for efficient navigation tasks.
Reducing token costs and data breach risks in VLN.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses open-source LLMs
Implements spatial-temporal CoT reasoning
Enhances scene perception with fine-grained object and spatial knowledge (see the sketch below)
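To make the scene-perception point concrete, here is a small sketch of how object detections might be verbalized into the fine-grained spatial description fed to the LLM; the `Detection` fields and the coarse direction binning are assumptions for illustration rather than the paper's actual perception pipeline.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """A detected object in the agent's egocentric frame (illustrative structure)."""
    label: str          # e.g. "sofa"
    distance_m: float   # distance from the agent in meters
    heading_deg: float  # 0 = straight ahead, positive = to the right

def heading_to_phrase(heading_deg: float) -> str:
    """Bin a heading angle into a coarse direction phrase."""
    h = (heading_deg + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)
    if abs(h) <= 30:
        return "ahead"
    if abs(h) >= 150:
        return "behind"
    return "to the right" if h > 0 else "to the left"

def describe_scene(detections: List[Detection]) -> str:
    """Turn detections into the spatial-context string used in the CoT prompt above."""
    nearest_first = sorted(detections, key=lambda d: d.distance_m)
    parts = [f"{d.label} {heading_to_phrase(d.heading_deg)} ({d.distance_m:.1f}m)"
             for d in nearest_first]
    return ", ".join(parts) if parts else "no salient objects detected"

if __name__ == "__main__":
    detections = [
        Detection("sofa", 1.5, 170.0),
        Detection("doorway", 2.1, 5.0),
        Detection("hallway", 3.4, -80.0),
    ]
    print(describe_scene(detections))
    # sofa behind (1.5m), doorway ahead (2.1m), hallway to the left (3.4m)
```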
👥 Authors

Yanyuan Qiao
Postdoctoral Research Fellow, EPFL
Embodied AI, Vision and Language, Multi-modal Learning

Wenqi Lyu
The University of Adelaide
Embodied AI

Hui Wang
Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia

Zixu Wang
Technical University of Munich & Infineon Technologies AG
Deep Learning, LLMs, Software Engineering, Autonomous Driving

Zerui Li
The University of Adelaide
Robotics, Computer Vision, Embodied AI

Yuan Zhang
Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia

Mingkui Tan
South China University of Technology
Machine Learning, Large-scale Optimization

Qi Wu
Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia