🤖 AI Summary
Current vision-and-language navigation (VLN) methods rely on large language models (LLMs) with static knowledge, hindering experience accumulation and utilization—thus limiting generalization and evolutionary capability. This paper proposes the first self-evolving multimodal LLM framework tailored for VLN, introducing three key innovations: hierarchical memory, retrieval-augmented reasoning, and automated reflection. These enable experience-driven continual learning and multi-step decision optimization during test-time inference. Crucially, the agent evolves *in situ* while executing navigation tasks in unseen environments, significantly enhancing long-horizon robustness. Evaluated on the R2R and REVERIE benchmarks, the method achieves success rates of 57.0% and 35.2%, respectively—absolute improvements of 23.9% and 15.0% over the prior state of the art. Moreover, performance consistently improves with accumulating interaction experience, demonstrating genuine online adaptation.
📝 Abstract
Recent advances in vision-and-language navigation (VLN) are largely attributable to emerging large language models (LLMs), which exhibit excellent generalization in instruction understanding and task reasoning. However, these methods are constrained by the fixed knowledge bases and reasoning abilities of LLMs, preventing them from fully incorporating experiential knowledge and thus leaving them without an efficient capacity to evolve. To address this, we draw inspiration from the evolutionary capabilities of natural agents and propose a self-evolving VLN framework (SE-VLN) that endows VLN agents with the ability to continuously evolve during testing. To the best of our knowledge, this is the first multimodal-LLM-powered self-evolving VLN framework. Specifically, SE-VLN comprises three core modules: a hierarchical memory module that distills successful and failed cases into reusable knowledge, a retrieval-augmented thought-based reasoning module that retrieves experience to enable multi-step decision-making, and a reflection module that realizes continual evolution. Comprehensive experiments show that SE-VLN achieves navigation success rates of 57.0% and 35.2% in unseen environments, representing absolute improvements of 23.9% and 15.0% over current state-of-the-art methods on the R2R and REVERIE datasets, respectively. Moreover, SE-VLN's performance improves as its experience repository grows, highlighting its great potential as a self-evolving agent framework for VLN.
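To make the abstract's three-module loop concrete, here is a minimal, self-contained sketch of the memorize → retrieve → reflect cycle. Everything in it (`Experience`, `HierarchicalMemory`, `reflect`, the word-overlap retrieval) is an illustrative assumption, not the paper's actual implementation: SE-VLN uses a multimodal LLM for reasoning and reflection, which a toy string-matching heuristic stands in for here.

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    """One distilled episode stored in memory (hypothetical schema)."""
    instruction: str
    outcome: str   # "success" or "failure"
    lesson: str    # reusable knowledge extracted from the episode

@dataclass
class HierarchicalMemory:
    """Toy stand-in for SE-VLN's hierarchical memory module:
    it keeps lessons from both successful and failed episodes."""
    store: list = field(default_factory=list)

    def add(self, exp: Experience) -> None:
        self.store.append(exp)

    def retrieve(self, instruction: str, k: int = 2) -> list:
        # Naive word-overlap scoring as a placeholder for the paper's
        # retrieval-augmented reasoning module (which uses an LLM).
        query = set(instruction.lower().split())
        def score(exp: Experience) -> int:
            return len(query & set(exp.instruction.lower().split()))
        return sorted(self.store, key=score, reverse=True)[:k]

def reflect(instruction: str, succeeded: bool) -> Experience:
    # Placeholder for the LLM-based reflection module: turn an
    # executed episode into a reusable lesson.
    outcome = "success" if succeeded else "failure"
    return Experience(instruction, outcome,
                      f"{outcome}: pattern noted for '{instruction}'")

# One test-time evolution step: act, reflect, store, then reuse.
memory = HierarchicalMemory()
memory.add(reflect("walk past the sofa and stop at the fridge", succeeded=False))
memory.add(reflect("go upstairs and wait by the bedroom door", succeeded=True))

hints = memory.retrieve("stop next to the fridge in the kitchen", k=1)
print(hints[0].lesson)
```

Because memory persists across episodes, later retrievals see earlier reflections, which is the mechanism behind the abstract's claim that performance improves as the experience repository grows.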