Semantically Aware UAV Landing Site Assessment from Remote Sensing Imagery via Multimodal Large Language Models

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of identifying semantic risks—such as crowds or temporary structures—that are often undetectable by conventional geometric methods during unmanned aerial vehicle (UAV) emergency landings. To this end, the authors propose a coarse-to-fine risk assessment framework that integrates remote sensing imagery with multimodal large language models (MLLMs). The approach first employs lightweight semantic segmentation to generate candidate landing zones and then performs fine-grained reasoning by fusing visual features with point-of-interest (POI) data. This work pioneers the application of MLLMs in landing risk evaluation, enabling global awareness of complex semantic hazards and interpretable decision-making. The authors also introduce ELSS, the first public benchmark dataset for emergency landing site selection. Experiments demonstrate that the proposed method significantly outperforms geometry-based baselines on ELSS in terms of risk identification accuracy and produces human-interpretable justifications, thereby enhancing system trustworthiness.

📝 Abstract
Safe UAV emergency landing requires more than just identifying flat terrain; it demands understanding complex semantic risks (e.g., crowds, temporary structures) invisible to traditional geometric sensors. In this paper, we propose a novel framework leveraging Remote Sensing (RS) imagery and Multimodal Large Language Models (MLLMs) for global context-aware landing site assessment. Unlike local geometric methods, our approach employs a coarse-to-fine pipeline: first, a lightweight semantic segmentation module efficiently pre-screens candidate areas; second, a vision-language reasoning agent fuses visual features with Point-of-Interest (POI) data to detect subtle hazards. To validate this approach, we construct and release the Emergency Landing Site Selection (ELSS) benchmark. Experiments demonstrate that our framework significantly outperforms geometric baselines in risk identification accuracy. Furthermore, qualitative results confirm its ability to generate human-like, interpretable justifications, enhancing trust in automated decision-making. The benchmark dataset is publicly accessible at https://anonymous.4open.science/r/ELSS-dataset-43D7.
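The coarse-to-fine pipeline described in the abstract can be sketched in miniature as follows. Everything here is illustrative: the `Candidate` fields, the `min_flat` threshold, and the POI-based hazard check are placeholder assumptions standing in for the paper's segmentation module and MLLM reasoning agent, not the authors' implementation.

```python
# Illustrative sketch of a coarse-to-fine landing-site assessment pipeline.
# All names, fields, and thresholds are hypothetical placeholders.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    zone_id: int
    flat_ratio: float            # fraction of pixels segmented as flat, open ground
    poi_tags: list = field(default_factory=list)  # nearby point-of-interest labels

def coarse_screen(candidates, min_flat=0.8):
    """Stage 1: lightweight pre-screen keeps zones that look geometrically flat."""
    return [c for c in candidates if c.flat_ratio >= min_flat]

def fine_assess(candidate, risky_tags=frozenset({"school", "stadium", "market"})):
    """Stage 2 stand-in: a real system would query an MLLM with the image crop
    and POI context; here a keyword check on POI tags acts as a proxy."""
    hazards = [t for t in candidate.poi_tags if t in risky_tags]
    return {
        "zone_id": candidate.zone_id,
        "safe": not hazards,
        "justification": ("no semantic hazards nearby" if not hazards
                          else "risky POIs: " + ", ".join(hazards)),
    }

def select_site(candidates):
    """Run both stages and return the first zone judged safe, or None."""
    reports = [fine_assess(c) for c in coarse_screen(candidates)]
    safe = [r for r in reports if r["safe"]]
    return safe[0] if safe else None

if __name__ == "__main__":
    zones = [
        Candidate(0, 0.90, ["school"]),  # flat, but semantically risky
        Candidate(1, 0.50, []),          # rejected in the coarse stage
        Candidate(2, 0.85, ["park"]),    # passes both stages
    ]
    print(select_site(zones))
```

The point of the two-stage split is cost: the cheap geometric filter runs everywhere, and the expensive semantic reasoning (an MLLM call in the paper) runs only on the surviving candidates.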
Problem

Research questions and friction points this paper is trying to address.

UAV landing
semantic risk
remote sensing imagery
emergency landing
hazard detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Large Language Models
Remote Sensing Imagery
Semantic Landing Site Assessment
Vision-Language Reasoning
Emergency UAV Landing
Chunliang Hua
School of Information Science and Engineering, Southeast University, Nanjing 211189, China
Zeyuan Yang
University of Massachusetts, Amherst
Lei Zhang
International Digital Economy Academy (IDEA)
Computer Vision · Multimedia · Machine Learning
Jiayang Sun
LASER, International Digital Economy Academy, Shenzhen 510085, China
Fengwen Chen
LASER, International Digital Economy Academy, Shenzhen 510085, China
Chunlan Zeng
Department of Electronic Engineering, East China Normal University, Shanghai 200241, China
Xiao Hu
LASER, International Digital Economy Academy, Shenzhen 510085, China