🤖 AI Summary
This work addresses an inherent limitation of Vision-Language Navigation (VLN) caused by partial observability, where a single agent can only leverage information from locations it has personally visited. To overcome this constraint, the authors propose Co-VLN, the first framework to systematically investigate the benefits of peer observation in VLN. Co-VLN introduces a lightweight, model-agnostic collaboration mechanism that enables multiple parallel agents to share structured perceptual memory, effectively expanding their collective perceptual horizon without incurring additional exploration cost. The approach is compatible with both learning-based (DUET) and zero-shot (MapGPT) navigation paradigms, achieving significant performance gains on the R2R benchmark and demonstrating the efficacy of visual sharing in collaborative embodied navigation.
📝 Abstract
Vision-Language Navigation (VLN) systems are fundamentally constrained by partial observability, as an agent can only accumulate knowledge from locations it has personally visited. As multiple robots increasingly coexist in shared environments, a natural question arises: can agents navigating the same space benefit from each other's observations? In this work, we introduce Co-VLN, a minimalist, model-agnostic framework for systematically investigating whether and how peer observations from concurrently navigating agents can benefit VLN. When independently navigating agents identify commonly traversed locations, they exchange structured perceptual memory, effectively expanding each agent's receptive field at no additional exploration cost. We validate our framework on the R2R benchmark under two representative paradigms (the learning-based DUET and the zero-shot MapGPT), and conduct extensive analytical experiments to systematically reveal the underlying dynamics of peer observation sharing in VLN. Results demonstrate that vision-sharing-enabled models yield substantial performance improvements across both paradigms, establishing a strong foundation for future research in collaborative embodied navigation.
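To make the sharing mechanism concrete, here is a minimal sketch of the exchange the abstract describes: agents index observations by viewpoint, and when two agents detect a commonly traversed location, each absorbs the other's memory entries for viewpoints it has not yet visited. All names (`Agent`, `share_memories`, the viewpoint IDs) are hypothetical illustrations, not the paper's actual API, and a real system would exchange structured visual features rather than placeholder strings.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Co-VLN-style peer observation sharing;
# names and data types are illustrative, not from the paper.

@dataclass
class Agent:
    name: str
    visited: set = field(default_factory=set)   # viewpoint IDs traversed first-hand
    memory: dict = field(default_factory=dict)  # viewpoint ID -> perceptual features

    def observe(self, viewpoint: str, features) -> None:
        """Record a first-hand observation at a visited viewpoint."""
        self.visited.add(viewpoint)
        self.memory[viewpoint] = features

def share_memories(a: Agent, b: Agent) -> None:
    """If the agents share any traversed location, exchange memory entries
    for viewpoints the other has not seen (no extra exploration cost)."""
    if a.visited & b.visited:  # a commonly traversed location exists
        for vp, feat in a.memory.items():
            b.memory.setdefault(vp, feat)
        for vp, feat in b.memory.items():
            a.memory.setdefault(vp, feat)

# Usage: two agents with overlapping routes pool their observations.
alice, bob = Agent("alice"), Agent("bob")
alice.observe("vp_01", "feat_kitchen")
alice.observe("vp_02", "feat_hallway")
bob.observe("vp_02", "feat_hallway")    # shared location with alice
bob.observe("vp_07", "feat_bedroom")
share_memories(alice, bob)
assert "vp_07" in alice.memory          # alice's receptive field expanded
```

Note that in this sketch each agent's own observations take precedence (`setdefault` never overwrites a first-hand entry), which is one plausible way to keep shared memory from corrupting an agent's direct experience.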