🤖 AI Summary
Existing low-altitude vision systems struggle to support anomaly understanding in urban governance due to their reliance on object-centric perception paradigms and loosely coupled vision-language pipelines. To address this limitation, this work introduces GovLA-10K, the first low-altitude multimodal benchmark tailored to governance requirements, along with a unified reasoning framework, GovLA-Reasoner. The framework incorporates an implicit feature alignment mechanism that deeply integrates fine-grained visual grounding with high-level linguistic reasoning, without fine-tuning either the visual detector or the large language model. The benchmark also contributes the first functionally salient object annotation scheme designed specifically for governance tasks. Experimental results show that the method significantly outperforms existing approaches across multiple governance-related tasks, confirming its effectiveness and practical utility.
📝 Abstract
Low-altitude vision systems are becoming critical infrastructure for smart city governance. However, existing object-centric perception paradigms and loosely coupled vision-language pipelines still struggle to support the management-oriented anomaly understanding required in real-world urban governance. To bridge this gap, we introduce GovLA-10K, the first management-oriented multimodal benchmark for low-altitude intelligence, along with GovLA-Reasoner, a unified vision-language reasoning framework tailored for governance-aware aerial perception. Unlike existing studies that aim to exhaustively annotate all visible objects, GovLA-10K is deliberately designed around functionally salient targets that correspond directly to practical management needs, and it further provides actionable management suggestions grounded in these observations. To coordinate fine-grained visual grounding with high-level contextual language reasoning, GovLA-Reasoner introduces an efficient feature adapter that implicitly enables discriminative representation sharing between the visual detector and the large language model (LLM). Extensive experiments show that our method significantly improves performance without fine-tuning any task-specific components. We believe our work offers a new perspective and foundation for future studies on management-aware low-altitude vision-language systems.
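To make the adapter idea concrete, the sketch below shows one plausible way such a component could bridge a frozen detector and a frozen LLM: region features from the detector are projected into the LLM's token-embedding space and prepended to the text embeddings, so only the adapter's parameters are trained. The class and function names, dimensions, and training setup here are illustrative assumptions, not the authors' actual GovLA-Reasoner implementation.

```python
# Hypothetical sketch (not the paper's code): a lightweight adapter that maps
# frozen-detector region features into the token-embedding space of a frozen
# LLM, so only the adapter is optimized. Names and dimensions are illustrative.
import torch
import torch.nn as nn


class GovLAFeatureAdapter(nn.Module):
    """Projects detector region features into the LLM embedding space."""

    def __init__(self, det_dim: int = 256, llm_dim: int = 4096, hidden: int = 1024):
        super().__init__()
        # A small MLP keeps the adapter cheap relative to the frozen backbones.
        self.proj = nn.Sequential(
            nn.Linear(det_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, llm_dim),
            nn.LayerNorm(llm_dim),
        )

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (batch, num_regions, det_dim) from the frozen detector.
        # Returns "soft" visual tokens of shape (batch, num_regions, llm_dim).
        return self.proj(region_feats)


def build_multimodal_inputs(adapter, region_feats, text_embeds):
    """Concatenate adapted visual tokens with text embeddings for the frozen LLM."""
    visual_tokens = adapter(region_feats)                   # (B, R, llm_dim)
    return torch.cat([visual_tokens, text_embeds], dim=1)   # (B, R + T, llm_dim)


if __name__ == "__main__":
    adapter = GovLAFeatureAdapter()
    feats = torch.randn(2, 16, 256)    # 16 detected regions per image
    text = torch.randn(2, 32, 4096)    # embedded prompt tokens
    inputs = build_multimodal_inputs(adapter, feats, text)
    print(inputs.shape)                # torch.Size([2, 48, 4096])

    # Only the adapter's parameters would be trained; the detector and the LLM
    # stay frozen, consistent with the paper's no-fine-tuning claim.
    optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-4)
```

In such a setup, the concatenated sequence would be fed through the frozen LLM, and gradients would flow only into the adapter, which is what makes the coupling between grounding and reasoning inexpensive to train.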