🤖 AI Summary
Poor detection performance for tiny objects in aerial imagery stems from shallow feature degradation and scale-imbalance-induced bias in regression loss. To address this, we propose the Scale-Aware Relay Layer (SARL) and Scale-Adaptive Loss (SAL). SARL enhances shallow feature propagation via cross-scale spatial-channel attention, while SAL dynamically adjusts regression weights per object scale and is compatible with both anchor-based and anchor-free detectors. Integrated into YOLOv5 and YOLOX, our method achieves an average precision gain of 5.5% across benchmarks including AI-TOD and DOTA-v2.0; notably, it attains 29.0% AP on AI-TOD-v2.0. The approach significantly improves detection accuracy, generalization capability, and robustness to noise for tiny objects in aerial images.
📝 Abstract
Despite recent remarkable advancements in object detection, modern detectors still struggle to detect tiny objects in aerial images. One key reason is that tiny objects carry limited features, which are inevitably degraded or lost during long-distance network propagation. Another is that smaller objects receive disproportionately greater regression penalties than larger ones during training. To tackle these issues, we propose a Scale-Aware Relay Layer (SARL) and a Scale-Adaptive Loss (SAL) for tiny object detection, both of which are seamlessly compatible with top-performing frameworks. Specifically, SARL employs cross-scale spatial-channel attention to progressively enrich the meaningful features of each layer and strengthen cross-layer feature sharing. SAL reshapes the vanilla IoU-based losses to dynamically assign lower weights to larger objects, focusing training on tiny objects while reducing the influence of large ones. Extensive experiments are conducted on three benchmarks (*i.e.*, AI-TOD, DOTA-v2.0, and VisDrone2019), and the results demonstrate that the proposed method boosts generalization ability by 5.5% Average Precision (AP) when embedded in the YOLOv5 (anchor-based) and YOLOX (anchor-free) baselines. Moreover, it also achieves robust performance, with 29.0% AP on the real-world noisy dataset (*i.e.*, AI-TOD-v2.0).
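The core idea behind SAL, reweighting an IoU-based regression loss so that larger boxes contribute less, can be illustrated with a minimal sketch. The weighting function below (logarithmic decay in box area, with an illustrative scale constant `tau`) is an assumption for demonstration only; the paper's actual formulation may differ.

```python
import math

def scale_adaptive_iou_loss(iou, box_area, tau=1024.0):
    """Hypothetical scale-adaptive reweighting of an IoU-based loss.

    The weight decays as box area grows, so tiny objects dominate the
    regression signal. `tau` (here 32x32 px) and the log1p decay are
    illustrative choices, not the paper's exact formula.
    """
    weight = 1.0 / (1.0 + math.log1p(box_area / tau))
    return weight * (1.0 - iou)  # down-weighted vanilla IoU loss
```

At the same IoU, a 16x16 box thus incurs a larger penalty than a 256x256 box, mirroring SAL's goal of focusing training on tiny objects.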