AI Summary
Existing vision-and-language navigation (VLN) approaches for aerial agents rely on detect-then-plan pipelines, which struggle to handle spatial reasoning and linguistic ambiguity. This work proposes a novel three-stage collaborative architecture that, for the first time, integrates structured visual prompts with vision-language models (VLMs) to enable end-to-end visuo-spatial reasoning directly in the image plane, without requiring additional training or complex intermediate representations. Evaluated on the CityNav benchmark, the method achieves a 70.3% relative improvement in success rate over the current best fully trained approach, demonstrating substantially enhanced spatial understanding and highlighting its strong potential as a backbone for aerial VLN systems.
Abstract
Existing aerial Vision-Language Navigation (VLN) methods predominantly adopt a detection-and-planning pipeline, which converts open-vocabulary detections into discrete textual scene graphs. These approaches suffer from inadequate spatial reasoning and inherent linguistic ambiguity. To address these bottlenecks, we propose a Visual-Spatial Reasoning (ViSA) enhanced framework for aerial VLN. Specifically, a triple-phase collaborative architecture is designed to leverage structured visual prompting, enabling Vision-Language Models (VLMs) to reason directly in the image plane without additional training or complex intermediate representations. Comprehensive evaluations on the CityNav benchmark demonstrate that the ViSA-enhanced VLN achieves a 70.3% improvement in success rate over the fully trained state-of-the-art (SOTA) method, highlighting its strong potential as a backbone for aerial VLN systems.
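The abstract does not spell out how structured visual prompting is realized, but the general idea can be illustrated with a minimal sketch. The version below assumes a Set-of-Mark-style numbered grid overlaid on the aerial view, so the VLM can answer with a discrete mark index in the image plane rather than free-form coordinates; the grid overlay and the `query_vlm` stub are hypothetical stand-ins, not the paper's actual interface.

```python
"""Minimal sketch of structured visual prompting for aerial VLN.

Assumptions (not from the paper): a numbered grid serves as the
structured visual prompt, and `query_vlm` is a placeholder for any
instruction-following VLM endpoint.
"""
from PIL import Image, ImageDraw


def overlay_grid_marks(image: Image.Image, rows: int = 4, cols: int = 4) -> Image.Image:
    """Draw a labeled grid on the aerial view so the VLM can ground its
    answer to a cell index directly in the image plane."""
    img = image.copy()
    draw = ImageDraw.Draw(img)
    w, h = img.size
    cw, ch = w / cols, h / rows
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * cw, r * ch
            draw.rectangle([x0, y0, x0 + cw, y0 + ch], outline="red", width=2)
            draw.text((x0 + 4, y0 + 4), str(r * cols + c), fill="red")
    return img


def query_vlm(image: Image.Image, prompt: str) -> str:
    """Hypothetical VLM call; substitute your model's API here."""
    raise NotImplementedError


def select_goal_cell(aerial_view: Image.Image, instruction: str) -> int:
    """Ask the VLM which marked cell the navigation instruction refers to."""
    marked = overlay_grid_marks(aerial_view)
    prompt = (
        "The aerial image is divided into numbered cells. "
        f"Instruction: {instruction!r}. "
        "Reply with the single cell number the agent should fly toward."
    )
    return int(query_vlm(marked, prompt).strip())
```

Because the VLM only has to name a marked region, this style of prompting sidesteps the textual scene-graph intermediate that the abstract identifies as the source of spatial-reasoning failures, and it requires no additional training of the VLM.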