🤖 AI Summary
This study addresses the challenge of identifying semantic risks (such as crowds or temporary structures) that conventional geometric methods often fail to detect during unmanned aerial vehicle (UAV) emergency landings. To this end, the authors propose a coarse-to-fine risk assessment framework that integrates remote sensing imagery with multimodal large language models (MLLMs). The approach first employs lightweight semantic segmentation to generate candidate landing zones, then performs fine-grained reasoning by fusing visual features with point-of-interest (POI) data. This work pioneers the application of MLLMs to landing risk evaluation, enabling global awareness of complex semantic hazards and interpretable decision-making. The authors also introduce ELSS, the first public benchmark dataset for emergency landing site selection. Experiments show that the proposed method significantly outperforms geometry-based baselines on ELSS in risk identification accuracy and produces human-interpretable justifications, enhancing system trustworthiness.
📝 Abstract
Safe UAV emergency landing requires more than just identifying flat terrain; it demands understanding complex semantic risks (e.g., crowds, temporary structures) invisible to traditional geometric sensors. In this paper, we propose a novel framework leveraging Remote Sensing (RS) imagery and Multimodal Large Language Models (MLLMs) for global context-aware landing site assessment. Unlike local geometric methods, our approach employs a coarse-to-fine pipeline: first, a lightweight semantic segmentation module efficiently pre-screens candidate areas; second, a vision-language reasoning agent fuses visual features with Point-of-Interest (POI) data to detect subtle hazards. To validate this approach, we construct and release the Emergency Landing Site Selection (ELSS) benchmark. Experiments demonstrate that our framework significantly outperforms geometric baselines in risk identification accuracy. Furthermore, qualitative results confirm its ability to generate human-like, interpretable justifications, enhancing trust in automated decision-making. The benchmark dataset is publicly accessible at https://anonymous.4open.science/r/ELSS-dataset-43D7.
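The coarse-to-fine pipeline described in the abstract can be sketched as two stages: a cheap geometric/semantic pre-screen that discards implausible zones, followed by a risk-scoring step that fuses per-zone semantic cues with POI information. The sketch below is illustrative only; the class names, thresholds, and POI schema are assumptions for exposition, not the authors' actual API or model.

```python
# Hypothetical sketch of a coarse-to-fine landing-site assessment loop.
# All names, scores, and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Zone:
    zone_id: int
    flatness: float       # coarse suitability score in [0, 1] from segmentation
    semantic_risk: float  # fine-grained visual risk estimate in [0, 1]

def coarse_screen(zones, flatness_threshold=0.6):
    """Stage 1: lightweight pre-screen keeps only plausibly flat zones."""
    return [z for z in zones if z.flatness >= flatness_threshold]

def fine_assess(candidates, poi_risk):
    """Stage 2: fuse per-zone POI risk (e.g. a flagged market or school)
    with the visual risk estimate; rank zones by lowest combined risk."""
    scored = [(z.zone_id, max(z.semantic_risk, poi_risk.get(z.zone_id, 0.0)))
              for z in candidates]
    return sorted(scored, key=lambda t: t[1])

zones = [
    Zone(0, flatness=0.9, semantic_risk=0.8),  # flat but visually crowded
    Zone(1, flatness=0.7, semantic_risk=0.1),  # flat and clear
    Zone(2, flatness=0.2, semantic_risk=0.0),  # rejected at stage 1
]
poi_risk = {0: 0.9}  # POI data flags zone 0 (e.g. a market)
ranking = fine_assess(coarse_screen(zones), poi_risk)
print(ranking[0][0])  # lowest-risk zone id
```

In the paper's framework the second stage is an MLLM reasoning agent producing a textual justification alongside the decision; here it is reduced to a numeric fusion purely to make the two-stage control flow concrete.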