VLingNav: Embodied Navigation with Adaptive Reasoning and Visual-Assisted Linguistic Memory

📅 2026-01-13
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing vision-language-action (VLA) models in complex, long-horizon navigation tasks, where the absence of explicit reasoning and persistent memory hinders performance in dynamic environments with strong spatial dependencies. Inspired by the dual-process theory of human cognition, the authors propose VLingNav, a framework featuring an adaptive chain-of-thought mechanism that dynamically triggers deliberative reasoning and a visual-assisted linguistic memory module enabling cross-modal semantic retention and long-term spatial inference. The contributions include a VLA architecture that transfers zero-shot to real-world robots, the Nav-AdaCoT-2.9M dataset (the largest embodied navigation dataset with reasoning annotations to date), and an online expert-guided reinforcement learning stage. VLingNav achieves state-of-the-art performance across multiple embodied navigation benchmarks and generalizes strongly across domains and tasks.
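
Since the adaptive chain-of-thought mechanism is described only at a high level, the following is a minimal Python sketch of how the per-step gating could look. All names here (`AdaptiveCoTAgent`, `gate`, `reasoner`, the 0.5 threshold) are illustrative assumptions, not the paper's published interface; the point is the switch between a fast observation-to-action path and a slow path that first emits an explicit reasoning trace.

```python
# Hypothetical sketch of adaptive chain-of-thought gating; not the paper's code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class NavStep:
    observation: str           # placeholder for an image/feature observation
    reasoning: Optional[str]   # chain-of-thought text, present only on slow steps
    action: str                # discrete navigation action, e.g. "move_forward"


class AdaptiveCoTAgent:
    """Switches between fast intuitive execution and slow deliberate planning."""

    def __init__(self, policy, reasoner, gate, threshold: float = 0.5):
        self.policy = policy      # fast path: (obs, instruction, cot) -> action
        self.reasoner = reasoner  # slow path: (obs, instruction) -> reasoning text
        self.gate = gate          # scores how much deliberation a step needs
        self.threshold = threshold

    def step(self, observation: str, instruction: str) -> NavStep:
        # The gate plays the role of the paper's "when to think" decision.
        need_slow = self.gate(observation, instruction) > self.threshold
        reasoning = self.reasoner(observation, instruction) if need_slow else None
        # On slow steps the reasoning text conditions the action prediction.
        action = self.policy(observation, instruction, reasoning)
        return NavStep(observation, reasoning, action)


if __name__ == "__main__":
    # Toy stand-ins for the learned components, to show the control flow.
    agent = AdaptiveCoTAgent(
        policy=lambda obs, inst, cot: "turn_left" if cot else "move_forward",
        reasoner=lambda obs, inst: "The hallway branches; the goal is likely left.",
        gate=lambda obs, inst: 0.8 if "junction" in obs else 0.1,
    )
    print(agent.step("junction ahead", "go to the kitchen"))   # slow, with reasoning
    print(agent.step("clear hallway", "go to the kitchen"))    # fast, reactive
```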

📝 Abstract
Vision-language-action (VLA) models have shown promising potential in embodied navigation by unifying perception and planning while inheriting the strong generalization abilities of large vision-language models (VLMs). However, most existing VLA models rely on reactive mappings directly from observations to actions, lacking the explicit reasoning capabilities and persistent memory required for complex, long-horizon navigation tasks. To address these challenges, we propose VLingNav, a VLA model for embodied navigation grounded in linguistic-driven cognition. First, inspired by the dual-process theory of human cognition, we introduce an adaptive chain-of-thought mechanism, which dynamically triggers explicit reasoning only when necessary, enabling the agent to fluidly switch between fast, intuitive execution and slow, deliberate planning. Second, to handle long-horizon spatial dependencies, we develop a visual-assisted linguistic memory module that constructs a persistent, cross-modal semantic memory, enabling the agent to recall past observations to prevent repetitive exploration and to infer movement trends in dynamic environments. For the training recipe, we construct Nav-AdaCoT-2.9M, the largest embodied navigation dataset with reasoning annotations to date, enriched with adaptive CoT annotations that induce a reasoning paradigm capable of adjusting both when to think and what to think about. Moreover, we incorporate an online expert-guided reinforcement learning stage, enabling the model to surpass pure imitation learning and to acquire more robust, self-explored navigation behaviors. Extensive experiments demonstrate that VLingNav achieves state-of-the-art performance across a wide range of embodied navigation benchmarks. Notably, VLingNav transfers to real-world robotic platforms in a zero-shot manner, executing various navigation tasks and demonstrating strong cross-domain and cross-task generalization.
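
The visual-assisted linguistic memory can likewise be sketched under stated assumptions: past views are stored as normalized visual embeddings (keys) paired with short linguistic descriptions (values), and cosine-similarity retrieval lets the agent recognize revisited places and recall what it saw there. The class name, embedding dimension, and 0.9 similarity threshold below are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a cross-modal memory with visual keys and linguistic
# values; illustrates the recall-to-avoid-re-exploration idea, not the paper's code.
import numpy as np


class VisualAssistedLinguisticMemory:
    """Persistent cross-modal memory: visual embeddings key linguistic entries."""

    def __init__(self, sim_threshold: float = 0.9):
        self.keys: list[np.ndarray] = []   # normalized visual embeddings of past views
        self.values: list[str] = []        # linguistic summaries of those views
        self.sim_threshold = sim_threshold

    def write(self, visual_embedding: np.ndarray, description: str) -> None:
        self.keys.append(visual_embedding / np.linalg.norm(visual_embedding))
        self.values.append(description)

    def recall(self, query_embedding: np.ndarray, top_k: int = 3) -> list[str]:
        """Return linguistic descriptions of the most similar stored views."""
        if not self.keys:
            return []
        q = query_embedding / np.linalg.norm(query_embedding)
        sims = np.stack(self.keys) @ q                 # cosine similarities
        order = np.argsort(sims)[::-1][:top_k]
        return [self.values[i] for i in order if sims[i] >= self.sim_threshold]

    def seen_before(self, query_embedding: np.ndarray) -> bool:
        """True if the current view closely matches a stored one, signalling a
        revisited place the agent should not re-explore."""
        return bool(self.recall(query_embedding, top_k=1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mem = VisualAssistedLinguisticMemory()
    view = rng.normal(size=128)                        # assumed embedding dimension
    mem.write(view, "red door at the end of the corridor")
    print(mem.seen_before(view))                       # True: same view recalled
    print(mem.seen_before(rng.normal(size=128)))       # almost surely False: novel view
```

In a full agent, the retrieved descriptions would be injected into the language context of the VLA model, which is what makes the memory "linguistic" rather than a purely geometric map.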
Problem

Research questions and friction points this paper is trying to address.

embodied navigation
reasoning
memory
VLA models
long-horizon tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive chain-of-thought
visual-assisted linguistic memory
embodied navigation
cross-modal semantic memory
zero-shot transfer