Learning Where to Focus: Density-Driven Guidance for Detecting Dense Tiny Objects

📅 2025-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Detecting dense, tiny objects in high-resolution remote sensing imagery remains challenging due to severe occlusion and extreme pixel sparsity. To address this, we propose DRMNet, a density-driven adaptive focusing network. Its core contributions are threefold: (1) a novel density map generation branch that explicitly models object spatial distribution; (2) a density-aware local-global focusing mechanism that dynamically allocates computational resources toward high-density regions; and (3) a dual-filter fusion module based on the discrete cosine transform (DCT), enabling frequency-domain decoupling and density-guided cross-scale feature interaction. Evaluated on AI-TOD and DTOD benchmarks, DRMNet achieves significant improvements over state-of-the-art methods—particularly in high-density and heavily occluded scenarios—yielding substantial mAP gains. These results validate the effectiveness and robustness of the density-driven paradigm for tiny object detection in remote sensing imagery.
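The summary says the density branch "explicitly models object spatial distribution" but does not give its formulation. A minimal sketch of how such a density prior is commonly constructed, assuming the standard recipe of placing a normalized Gaussian at each annotated object center (the kernel width `sigma` is an illustrative choice, not from the paper):

```python
import numpy as np

def density_map(centers, shape, sigma=2.0):
    """Build a density prior by dropping a unit-mass Gaussian at each
    object center. `centers` is a list of (row, col); `shape` is (H, W).
    The resulting map integrates to roughly the object count, so dense
    clusters show up as high-intensity regions."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape, dtype=np.float64)
    for cy, cx in centers:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        g /= g.sum()  # normalize so each object contributes mass 1
        dmap += g
    return dmap
```

Because the map sums to the object count, it doubles as a quantifiable prior: thresholding or ranking its values tells the network where objects concentrate.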

📝 Abstract
High-resolution remote sensing imagery increasingly contains dense clusters of tiny objects, the detection of which is extremely challenging due to severe mutual occlusion and limited pixel footprints. Existing detection methods typically allocate computational resources uniformly, failing to adaptively focus on these density-concentrated regions, which hinders feature learning effectiveness. To address these limitations, we propose the Dense Region Mining Network (DRMNet), which leverages density maps as explicit spatial priors to guide adaptive feature learning. First, we design a Density Generation Branch (DGB) to model object distribution patterns, providing quantifiable priors that guide the network toward dense regions. Second, to address the computational bottleneck of global attention, our Dense Area Focusing Module (DAFM) uses these density maps to identify and focus on dense areas, enabling efficient local-global feature interaction. Finally, to mitigate feature degradation during hierarchical extraction, we introduce a Dual Filter Fusion Module (DFFM). It disentangles multi-scale features into high- and low-frequency components using a discrete cosine transform and then performs density-guided cross-attention to enhance complementarity while suppressing background interference. Extensive experiments on the AI-TOD and DTOD datasets demonstrate that DRMNet surpasses state-of-the-art methods, particularly in complex scenarios with high object density and severe occlusion.
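The DFFM described above disentangles features into high- and low-frequency components via a discrete cosine transform. The paper's exact formulation is not given here, but the frequency split can be sketched in numpy under one assumption of mine: a simple cutoff on the sum of coefficient indices separates "low" from "high" (the real module presumably learns or tunes this partition):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n); its transpose inverts it."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def split_frequencies(x, cutoff):
    """Split a 2-D feature map into low- and high-frequency parts.
    DCT coefficients whose index sum is below `cutoff` form the 'low'
    component; the remainder is 'high'. The two parts sum to the input."""
    h, w = x.shape
    dh, dw = dct_matrix(h), dct_matrix(w)
    coeffs = dh @ x @ dw.T                 # separable 2-D DCT-II
    ki, kj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    low_mask = (ki + kj) < cutoff
    low = dh.T @ (coeffs * low_mask) @ dw  # inverse DCT of masked coeffs
    high = x - low
    return low, high
```

The low component carries smooth context (backgrounds, large structures) while the high component retains edges and fine detail, which is what makes the two streams complementary for tiny objects.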
Problem

Research questions and friction points this paper is trying to address.

Detecting dense, tiny objects in high-resolution remote sensing imagery
Handling severe mutual occlusion and limited pixel footprints
Adaptively focusing feature learning on density-concentrated regions instead of allocating computation uniformly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Density maps serve as explicit spatial priors that guide adaptive feature learning
Density guidance restricts attention to dense areas, enabling efficient local-global feature interaction
DCT-based frequency decoupling with density-guided cross-attention enhances multi-scale feature fusion
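The focusing idea behind these contributions can be illustrated with a small sketch: given a density map, rank tiles by total density and keep only the top-k for expensive attention. This is my simplified stand-in for the paper's DAFM, not its actual mechanism (tile size, stride, and selection rule here are illustrative):

```python
import numpy as np

def top_density_windows(dmap, win, k):
    """Rank non-overlapping win x win tiles of a density map by their
    total density and return the top-k tile offsets (row, col). A
    detector could then run costly global attention only on these tiles."""
    h, w = dmap.shape
    scores = {}
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            scores[(r, c)] = dmap[r:r + win, c:c + win].sum()
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

With k tiles of size win x win, attention cost scales with k·win² rather than the full image area, which is the efficiency argument the abstract makes for density-aware focusing.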