Towards Governance-Oriented Low-Altitude Intelligence: A Management-Centric Multi-Modal Benchmark With Implicitly Coordinated Vision-Language Reasoning Framework

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing low-altitude vision systems struggle to support anomaly understanding in urban governance due to their reliance on object-centric perception paradigms and loosely coupled vision-language pipelines. To address this limitation, this work introduces GovLA-10K, the first low-altitude multimodal benchmark tailored to governance requirements, along with a unified reasoning framework, GovLA-Reasoner. The proposed framework incorporates an implicit feature alignment mechanism that enables deep integration of fine-grained visual grounding and high-level linguistic reasoning without requiring fine-tuning of either the visual detector or the large language model. In addition, GovLA-10K introduces the first functionally salient object annotation scheme specifically designed for governance tasks. Experimental results demonstrate that the method significantly outperforms existing approaches across multiple governance-related tasks, confirming its effectiveness and practical utility.

📝 Abstract
Low-altitude vision systems are becoming a critical infrastructure for smart city governance. However, existing object-centric perception paradigms and loosely coupled vision-language pipelines still struggle to support the management-oriented anomaly understanding required in real-world urban governance. To bridge this gap, we introduce GovLA-10K, the first management-oriented multi-modal benchmark for low-altitude intelligence, along with GovLA-Reasoner, a unified vision-language reasoning framework tailored for governance-aware aerial perception. Unlike existing studies that aim to exhaustively annotate all visible objects, GovLA-10K is deliberately designed around functionally salient targets that directly correspond to practical management needs, and further provides actionable management suggestions grounded in these observations. To effectively coordinate fine-grained visual grounding with high-level contextual language reasoning, GovLA-Reasoner introduces an efficient feature adapter that implicitly coordinates discriminative representation sharing between the visual detector and the large language model (LLM). Extensive experiments show that our method significantly improves performance while avoiding fine-tuning of any task-specific components. We believe our work offers a new perspective and foundation for future studies on management-aware low-altitude vision-language systems.
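The abstract describes a feature adapter that bridges a frozen visual detector and a frozen LLM so that neither needs task-specific fine-tuning. As a rough illustration of this general pattern (not the paper's actual architecture), the sketch below projects per-object detector features into an LLM-style embedding space through a small trainable MLP; all dimensions, weights, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions; real systems might map e.g. 256-d region
# features into a 4096-d LLM embedding space.
det_dim, llm_dim = 64, 128

# Only these adapter weights would be trainable; the detector producing
# the input features and the LLM consuming the output stay frozen.
W1 = rng.standard_normal((det_dim, llm_dim)) * 0.02
W2 = rng.standard_normal((llm_dim, llm_dim)) * 0.02

def adapt(det_feats: np.ndarray) -> np.ndarray:
    """Map (num_objects, det_dim) detector features to LLM-space tokens."""
    hidden = np.tanh(det_feats @ W1)  # tanh stands in for a GELU-style nonlinearity
    return hidden @ W2                # (num_objects, llm_dim) pseudo-token embeddings

region_feats = rng.standard_normal((10, det_dim))  # features for 10 detected objects
tokens = adapt(region_feats)
print(tokens.shape)  # (10, 128)
```

In practice such pseudo-tokens would be concatenated with the text prompt's token embeddings, letting the LLM reason over grounded object evidence without any gradient updates to the detector or the LLM itself.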
Problem

Research questions and friction points this paper is trying to address.

low-altitude intelligence
smart city governance
vision-language reasoning
management-oriented perception
anomaly understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

governance-oriented perception
low-altitude intelligence
vision-language reasoning
multi-modal benchmark
implicit coordination
Hao Chang
Peking University
Neuroscience · Gut-brain axis

Zhihui Wang
Zidong Taichu (Beijing) Technology Co., Ltd., China

Lingxiang Wu
Zidong Taichu (Beijing) Technology Co., Ltd., China

Peijin Wang
Aerospace Information Research Institute, Chinese Academy of Sciences
Foundation model · Remote sensing · Deep learning

Wenhui Diao
Aerospace Information Research Institute, Chinese Academy of Sciences
Object Detection

Jinqiao Wang
Zidong Taichu (Beijing) Technology Co., Ltd., China