ViSA-Enhanced Aerial VLN: A Visual-Spatial Reasoning Enhanced Framework for Aerial Vision-Language Navigation

๐Ÿ“… 2026-03-09
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing vision-and-language navigation (VLN) approaches for aerial agents rely on detect-then-plan pipelines, which struggle to handle spatial reasoning and linguistic ambiguity effectively. This work proposes a novel three-stage collaborative architecture that, for the first time, integrates structured visual prompts with vision-language models (VLMs) to enable end-to-end visual-spatial reasoning directly in the image plane, without requiring additional training or complex intermediate representations. Evaluated on the CityNav benchmark, the method achieves a 70.3% relative improvement in success rate over the current best fully trained approach, demonstrating substantially enhanced spatial understanding and highlighting its strong potential as a backbone for aerial VLN systems.

๐Ÿ“ Abstract
Existing aerial Vision-Language Navigation (VLN) methods predominantly adopt a detection-and-planning pipeline, which converts open-vocabulary detections into discrete textual scene graphs. These approaches suffer from inadequate spatial reasoning capabilities and inherent linguistic ambiguities. To address these bottlenecks, we propose a Visual-Spatial Reasoning (ViSA) enhanced framework for aerial VLN. Specifically, a three-stage collaborative architecture is designed to leverage structured visual prompting, enabling Vision-Language Models (VLMs) to reason directly on the image plane without additional training or complex intermediate representations. Comprehensive evaluations on the CityNav benchmark demonstrate that the ViSA-enhanced VLN achieves a 70.3% improvement in success rate over the fully trained state-of-the-art (SOTA) method, highlighting its strong potential as a backbone for aerial VLN systems.
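The paper does not publish its prompt format, but the core idea of structured visual prompting can be sketched: candidate waypoints are overlaid on the aerial observation as labeled markers, and an off-the-shelf VLM is asked to pick a marker directly in the image plane, with no task-specific training. Below is a minimal, hypothetical illustration; the `Marker` type, the prompt wording, and the `mock_vlm` stand-in are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of structured visual prompting for aerial VLN.
# Candidate waypoints are rendered as labeled markers on the aerial view,
# and the VLM answers with a marker label, i.e. an action in the image plane.
from dataclasses import dataclass

@dataclass
class Marker:
    label: str
    x: int  # pixel column of the marker in the aerial view
    y: int  # pixel row of the marker

def build_visual_prompt(instruction, markers):
    """Compose the text side of the structured prompt: each marker's label
    and pixel location, plus the natural-language navigation instruction."""
    lines = [f"Marker {m.label} at pixel ({m.x}, {m.y})" for m in markers]
    return (
        "The aerial image is annotated with candidate waypoints:\n"
        + "\n".join(lines)
        + f"\nInstruction: {instruction}\n"
        + "Answer with the single marker label to fly toward."
    )

def choose_waypoint(vlm, image, instruction, markers):
    """Query a frozen VLM (no extra training) and map its textual answer
    back to a marker, yielding the next waypoint."""
    prompt = build_visual_prompt(instruction, markers)
    answer = vlm(image, prompt).strip()
    by_label = {m.label: m for m in markers}
    return by_label.get(answer)

# Toy stand-in for a real VLM call, just to make the sketch executable:
def mock_vlm(image, prompt):
    return "B"  # pretend the model selected marker B

markers = [Marker("A", 120, 340), Marker("B", 512, 260), Marker("C", 880, 410)]
target = choose_waypoint(mock_vlm, object(), "Fly to the red-roofed building.", markers)
print(target.label, target.x, target.y)  # B 512 260
```

Because the reasoning happens over markers in the image itself, the pipeline avoids converting detections into a discrete textual scene graph, which is the bottleneck the abstract attributes to detection-and-planning methods.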
Problem

Research questions and friction points this paper is trying to address.

Aerial Vision-Language Navigation
Spatial Reasoning
Linguistic Ambiguity
Visual-Spatial Reasoning
Scene Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual-Spatial Reasoning
Aerial Vision-Language Navigation
Structured Visual Prompting
Vision-Language Models
Detection-and-Planning Pipeline
Haoyu Tong
Tianmushan Laboratory, Beihang University, Hangzhou 311115, China; Hangzhou International Innovation Institute, Beihang University, Hangzhou 311115, China
Xiangyu Dong
Staff Software Engineer, Google
Xiaoguang Ma
Foshan Graduate School of Innovation, Northeastern University, Foshan, China; Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110819, China
Haoran Zhao
School of Aeronautic Science and Engineering, Beihang University, Beijing 100191, China; qingniaoAI
Yaoming Zhou
School of Aeronautic Science and Engineering, Beihang University, Beijing 100191, China
Chenghao Lin
Tianmushan Laboratory, Beihang University, Hangzhou 311115, China; Hangzhou International Innovation Institute, Beihang University, Hangzhou 311115, China