🤖 AI Summary
To address three key bottlenecks in Vision-and-Language Navigation (VLN): weak spatial reasoning, poor cross-modal alignment, and memory overload in long-horizon tasks, this paper proposes MSNav, a tri-module collaborative framework: (1) a memory module with dynamic node pruning that constructs lightweight, efficient topological map representations; (2) Qwen-Sp, a specialized spatial reasoning model obtained by fine-tuning Qwen3-4B to explicitly model instruction-object-spatial relational structures; and (3) an LLM-driven path planner that jointly optimizes semantic and geometric constraints. MSNav is the first framework to deeply integrate structured spatial reasoning with large language model-based path planning. Evaluated on the R2R and REVERIE benchmarks, it achieves state-of-the-art performance, significantly improving Success Rate (SR) and Success weighted by Path Length (SPL). On the I-O-S test set, it surpasses leading commercial LLMs in both F1 score and Normalized Discounted Cumulative Gain (NDCG).
📝 Abstract
Vision-and-Language Navigation (VLN) requires an agent to interpret natural language instructions and navigate complex environments. Current approaches often adopt a "black-box" paradigm, where a single Large Language Model (LLM) makes end-to-end decisions. However, this paradigm suffers from critical vulnerabilities, including poor spatial reasoning, weak cross-modal grounding, and memory overload in long-horizon tasks. To systematically address these issues, we propose Memory Spatial Navigation (MSNav), a framework that fuses three modules into a synergistic architecture, transforming fragile inference into robust, integrated intelligence: the Memory Module, a dynamic map memory that tackles memory overload through selective node pruning, enhancing long-range exploration; the Spatial Module, which performs spatial reasoning and object relationship inference to improve endpoint recognition; and the Decision Module, which uses LLM-based path planning to execute robust actions. To power the Spatial Module, we also introduce an Instruction-Object-Space (I-O-S) dataset and fine-tune the Qwen3-4B model into Qwen-Spatial (Qwen-Sp), which outperforms leading commercial LLMs in object list extraction, achieving higher F1 and NDCG scores on the I-O-S test set. Extensive experiments on the Room-to-Room (R2R) and REVERIE datasets demonstrate MSNav's state-of-the-art performance, with significant improvements in Success Rate (SR) and Success weighted by Path Length (SPL).
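The abstract evaluates object-list extraction with F1 and NDCG on the I-O-S test set. As an illustrative sketch only (the paper's exact evaluation protocol and relevance grading are not given here, so binary relevance against a reference list is an assumption), these two metrics can be computed as:

```python
import math

def f1_score(pred, gold):
    """Set-based F1 between a predicted and a reference object list."""
    pred_set, gold_set = set(pred), set(gold)
    tp = len(pred_set & gold_set)  # true positives: objects found in both lists
    if tp == 0:
        return 0.0
    precision = tp / len(pred_set)
    recall = tp / len(gold_set)
    return 2 * precision * recall / (precision + recall)

def ndcg(pred, gold):
    """NDCG over a ranked object list, with binary relevance
    (1 if the predicted object appears in the reference list)."""
    # Discounted cumulative gain: relevant items are discounted by rank.
    dcg = sum(1.0 / math.log2(i + 2) for i, obj in enumerate(pred) if obj in gold)
    # Ideal DCG: all relevant items ranked first.
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(gold), len(pred))))
    return dcg / ideal if ideal > 0 else 0.0

# Hypothetical example: the agent extracts a ranked object list from an instruction.
pred = ["sofa", "table", "lamp"]
gold = ["sofa", "lamp"]
print(f1_score(pred, gold))  # 0.8
print(round(ndcg(pred, gold), 3))
```

Higher NDCG rewards placing instruction-relevant objects earlier in the extracted list, which matters when downstream planning attends to the top-ranked objects first.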