🤖 AI Summary
Outdoor aerial vision-language navigation (VLN) lacks standardized benchmarks and tooling due to vast operational scales and prohibitive real-world data collection costs.
Method: We introduce the first end-to-end open platform for aerial VLN. Our approach comprises: (1) an automated multi-source data acquisition pipeline integrating Unreal Engine, GTA V, and Google Earth rendering, enhanced by 3D Gaussian Splatting for high-fidelity real-to-sim scene reconstruction; (2) semantic segmentation and automatic trajectory planning modules; and (3) OpenFly-Agent—a navigation model with keyframe memory for long-horizon reasoning.
Contribution/Results: We release a large-scale benchmark of 100K diverse trajectories across 18 scene categories, varying in altitude and path length. Experiments demonstrate that OpenFly-Agent significantly outperforms existing baselines in cross-scene generalization and long-range navigation. The entire platform—including tools, dataset, and model—is open-sourced, establishing foundational infrastructure for aerial embodied intelligence.
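The data pipeline described above can be sketched as a simple staged flow: acquire a point cloud from a rendering engine, segment it semantically, plan a flight trajectory, and generate a matching instruction. All class and function names below are illustrative assumptions for exposition, not the released OpenFly API.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    name: str
    point_cloud: list = field(default_factory=list)   # (x, y, z) points
    semantics: dict = field(default_factory=dict)     # point index -> label

def acquire_point_cloud(scene: Scene) -> Scene:
    # Stand-in for rendering-engine export (UE / GTA V / Google Earth / 3D GS).
    scene.point_cloud = [(0.0, 0.0, 50.0), (10.0, 0.0, 55.0), (20.0, 5.0, 60.0)]
    return scene

def segment(scene: Scene) -> Scene:
    # Stand-in for semantic segmentation over the acquired point cloud.
    scene.semantics = {i: "building" for i in range(len(scene.point_cloud))}
    return scene

def plan_trajectory(scene: Scene) -> list:
    # Stand-in for automatic flight-trajectory creation: here, simply
    # the acquired points in order, treated as waypoints.
    return list(scene.point_cloud)

def generate_instruction(trajectory: list) -> str:
    # Stand-in for language-instruction generation from a trajectory.
    return f"Fly through {len(trajectory)} waypoints, ascending gradually."

scene = segment(acquire_point_cloud(Scene("demo_city")))
trajectory = plan_trajectory(scene)
instruction = generate_instruction(trajectory)
print(instruction)
```

Each stage only passes toy data forward; the point is the stage ordering, which mirrors the four toolchain steps named in the summary.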
📝 Abstract
Vision-Language Navigation (VLN) aims to guide agents through an environment by leveraging both language instructions and visual cues, playing a pivotal role in embodied AI. Indoor VLN has been extensively studied, whereas outdoor aerial VLN remains underexplored. A likely reason is that outdoor aerial views encompass vast areas, making data collection far more challenging and leaving the field without benchmarks. To address this problem, we propose OpenFly, a platform comprising a versatile toolchain and a large-scale benchmark for aerial VLN. First, we develop a highly automated toolchain for data collection, enabling automatic point cloud acquisition, scene semantic segmentation, flight trajectory creation, and instruction generation. Second, based on the toolchain, we construct a large-scale aerial VLN dataset with 100K trajectories, covering diverse heights and lengths across 18 scenes. The corresponding visual data are generated using various rendering engines and advanced techniques, including Unreal Engine, GTA V, Google Earth, and 3D Gaussian Splatting (3D GS), and all exhibit high visual quality. Notably, 3D GS supports real-to-sim rendering, further enhancing the realism of the dataset. Third, we propose OpenFly-Agent, a keyframe-aware VLN model that takes language instructions, current observations, and historical keyframes as input and outputs flight actions directly. Extensive analyses and experiments showcase the superiority of the OpenFly platform and OpenFly-Agent. The toolchain, dataset, and code will be open-sourced.
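The keyframe-aware agent loop can be sketched minimally: store an observation as a keyframe only when it differs enough from the last stored one, then condition the policy on the instruction, the current observation, and the keyframe memory. The change-threshold selection rule and all names here are assumptions for illustration; the paper's actual keyframe criterion and policy may differ.

```python
def frame_distance(a, b):
    # Toy observation distance: mean absolute element-wise difference.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def update_keyframes(keyframes, obs, threshold=0.5):
    # Store a frame only if it differs enough from the last keyframe,
    # keeping the history compact for long-horizon reasoning.
    if not keyframes or frame_distance(keyframes[-1], obs) > threshold:
        keyframes.append(obs)
    return keyframes

def agent_step(instruction, obs, keyframes):
    # Stand-in policy: the real model maps (instruction, current
    # observation, keyframes) to a flight action; here a fixed toy rule.
    actions = ["forward", "ascend", "turn_left", "stop"]
    return actions[len(keyframes) % len(actions)]

keyframes = []
for obs in ([0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.05, 1.0]):
    keyframes = update_keyframes(keyframes, obs)
    action = agent_step("fly over the tall building", obs, keyframes)
print(len(keyframes), action)
```

The design point is that near-duplicate frames (the second and fourth observations above) never enter the memory, so the history the policy conditions on stays short even over long flights.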