MSNav: Zero-Shot Vision-and-Language Navigation with Dynamic Memory and LLM Spatial Reasoning

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three key bottlenecks in Vision-and-Language Navigation (VLN)—weak spatial reasoning, poor cross-modal alignment, and memory overload in long-horizon tasks—this paper proposes MSNav, a tri-module collaborative framework: (1) a memory module with dynamic node pruning that constructs lightweight, efficient topological map representations; (2) Qwen-Sp, a specialized spatial reasoning model fine-tuned from Qwen3-4B to explicitly model instruction-object-space relational structures; and (3) an LLM-driven path planner that jointly optimizes semantic and geometric constraints. MSNav is the first framework to deeply integrate structured spatial reasoning with LLM-based path planning. Evaluated on the R2R and REVERIE benchmarks, it achieves state-of-the-art performance with significant gains in Success Rate (SR) and Success weighted by Path Length (SPL). On the I-O-S test set, Qwen-Sp surpasses leading commercial LLMs in both F1 score and Normalized Discounted Cumulative Gain (NDCG).
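The summary reports NDCG on the I-O-S test set for ranked object-list extraction. As a reminder of the metric itself (not the paper's evaluation code), a minimal sketch using the standard exponential-gain formulation, with illustrative graded relevances:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain over a ranked list of graded relevances:
    # sum of (2^rel - 1) / log2(rank + 1), with ranks starting at 1.
    return sum((2**rel - 1) / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

A perfect ordering such as `[3, 2, 1]` yields 1.0, while any misordering of the same relevances scores strictly lower.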

📝 Abstract
Vision-and-Language Navigation (VLN) requires an agent to interpret natural language instructions and navigate complex environments. Current approaches often adopt a "black-box" paradigm in which a single Large Language Model (LLM) makes end-to-end decisions. This paradigm, however, suffers from critical vulnerabilities, including poor spatial reasoning, weak cross-modal grounding, and memory overload in long-horizon tasks. To systematically address these issues, we propose Memory Spatial Navigation (MSNav), a framework that fuses three modules into a synergistic architecture, transforming fragile inference into robust, integrated intelligence. MSNav integrates three modules: the Memory Module, a dynamic map memory that tackles memory overload through selective node pruning, enhancing long-range exploration; the Spatial Module, which performs spatial reasoning and object-relationship inference to improve endpoint recognition; and the Decision Module, which uses LLM-based path planning to execute robust actions. To power the Spatial Module, we also introduce an Instruction-Object-Space (I-O-S) dataset and fine-tune Qwen3-4B into Qwen-Spatial (Qwen-Sp), which outperforms leading commercial LLMs in object-list extraction, achieving higher F1 and NDCG scores on the I-O-S test set. Extensive experiments on the Room-to-Room (R2R) and REVERIE datasets demonstrate MSNav's state-of-the-art performance, with significant improvements in Success Rate (SR) and Success weighted by Path Length (SPL).
Problem

Research questions and friction points this paper is trying to address.

Addresses poor spatial reasoning in vision-language navigation
Solves memory overload in long-horizon navigation tasks
Improves cross-modal grounding between language and visual inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic memory module with selective node pruning
Spatial reasoning module with fine-tuned Qwen-Sp model
LLM-based decision module for robust path planning
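The memory-module bullet describes selective node pruning of a topological map to cap memory growth. A minimal sketch of one plausible pruning rule — keeping the top-k nodes scored by recency and proximity to the agent. The node schema (`pos`, `last_visit`, `id`), the scoring weights, and the cap are all illustrative assumptions, not the paper's actual criterion:

```python
import math

def prune_topological_map(nodes, agent_pos, max_nodes=20, recency_weight=0.5):
    """Keep at most `max_nodes` map nodes, preferring recent and nearby ones.

    `nodes`: list of dicts with hypothetical keys 'id', 'pos' (x, y),
    and 'last_visit' (step index). The scoring rule is an assumption
    for illustration only.
    """
    if len(nodes) <= max_nodes:
        return nodes
    latest = max(n['last_visit'] for n in nodes)

    def score(n):
        # Recently visited nodes and nodes near the agent score higher.
        recency = 1.0 / (1 + latest - n['last_visit'])
        proximity = 1.0 / (1 + math.dist(n['pos'], agent_pos))
        return recency_weight * recency + (1 - recency_weight) * proximity

    return sorted(nodes, key=score, reverse=True)[:max_nodes]
```

Any monotone combination of recency, distance, and semantic relevance would fit the same interface; the point is that the map stays bounded regardless of episode length.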
Chenghao Liu
School of Advanced Manufacturing and Robotics, Peking University
Zhimu Zhou
School of Advanced Manufacturing and Robotics, Peking University
Jiachen Zhang
School of Advanced Manufacturing and Robotics, Peking University
Minghao Zhang
Institute for Network Sciences and Cyberspace, Tsinghua University
Songfang Huang
Peking University, Alibaba DAMO, IBM Research, The University of Edinburgh
LLM, Embodied AI
Huiling Duan
School of Advanced Manufacturing and Robotics, Peking University