AI Summary
This work addresses a central limitation of existing open-vocabulary aerial detection (OVAD) and remote sensing visual grounding (RSVG) methods: they struggle to achieve fine-grained semantic understanding and multi-object detection simultaneously. To this end, we propose OTA-Det, the first framework that unifies OVAD and RSVG through task reformulation and cross-dataset joint training. Our approach introduces a multi-level dense semantic alignment mechanism, enabling text-driven, multi-object detection from holistic descriptions down to attribute-level granularity. Built upon the RT-DETR architecture, OTA-Det supports efficient dense supervision and multi-granularity semantic alignment. The model achieves state-of-the-art performance across six benchmarks while maintaining real-time inference at 34 FPS.
Abstract
Open-Vocabulary Aerial Detection (OVAD) and Remote Sensing Visual Grounding (RSVG) have emerged as two key paradigms for aerial scene understanding. However, each paradigm suffers from inherent limitations when operating in isolation: OVAD is restricted to coarse category-level semantics, while RSVG is structurally limited to single-target localization. These limitations prevent existing methods from simultaneously supporting rich semantic understanding and multi-target detection. To address this, we propose OTA-Det, the first framework that unifies both paradigms within a cohesive architecture. Specifically, we introduce a task reformulation strategy that unifies task objectives and supervision mechanisms, enabling joint training across datasets from both paradigms with dense supervision signals. Furthermore, we propose a dense semantic alignment strategy that establishes explicit correspondences at multiple granularities, from holistic expressions to individual attributes, enabling fine-grained semantic understanding. To ensure real-time efficiency, OTA-Det builds upon the RT-DETR architecture, extending it from closed-set detection to open-text detection through several highly efficient modules. OTA-Det achieves state-of-the-art performance on six benchmarks spanning both OVAD and RSVG tasks while maintaining real-time inference at 34 FPS.