🤖 AI Summary
To address the insufficient multi-scale local feature extraction capability of Vision Transformers (ViTs) for densely packed small targets in Synthetic Aperture Radar (SAR) imagery, this paper proposes the Density-Sensitive Vision Transformer with Adaptive Tokens (DenSe-AdViT). The method introduces two key components: (1) a Density-Aware Module (DAM) that explicitly models the spatial distribution density of targets and generates a density tensor; and (2) a Density-Enhanced Fusion Module (DEFM) that enables density-guided collaborative refinement of CNN-derived local features and Transformer-based global features. DenSe-AdViT leverages these density priors to sharpen attention on likely target regions, enhancing discriminability for small, densely arranged targets. Experiments show strong performance, achieving 79.8% mAP on the RSDD dataset and 92.5% mAP on the SIVED dataset.
📝 Abstract
Vision Transformer (ViT) has achieved remarkable results in object detection for synthetic aperture radar (SAR) images, owing to its exceptional ability to extract global features. However, it struggles with the extraction of multi-scale local features, leading to limited performance in detecting small targets, especially when they are densely arranged. Therefore, we propose the Density-Sensitive Vision Transformer with Adaptive Tokens (DenSe-AdViT) for dense SAR target detection. We design a Density-Aware Module (DAM) as a preliminary component that generates a density tensor based on the target distribution. It is guided by a carefully crafted objective metric, enabling precise and effective capture of the spatial distribution and density of objects. To integrate the multi-scale information enhanced by convolutional neural networks (CNNs) with the global features derived from the Transformer, the Density-Enhanced Fusion Module (DEFM) is proposed. It effectively refines attention toward regions where targets are likely present, with the assistance of the density mask and the multi-source features. Notably, our DenSe-AdViT achieves 79.8% mAP on the RSDD dataset and 92.5% on the SIVED dataset, both of which feature a large number of densely distributed vehicle targets.
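The abstract does not give implementation details, but the density-guided fusion idea (a density mask re-weighting a blend of CNN local features and Transformer global features) might be sketched as follows. The function name `density_guided_fusion`, the mixing weight `alpha`, and the `1 + mask` modulation form are illustrative assumptions, not the paper's actual DEFM formulation.

```python
import numpy as np

def density_guided_fusion(local_feat, global_feat, density_mask, alpha=0.5):
    """Hypothetical sketch of density-guided feature fusion.

    local_feat:   (H, W, C) multi-scale local features from a CNN branch
    global_feat:  (H, W, C) global features from a Transformer branch
    density_mask: (H, W) values in [0, 1] indicating target density
    alpha:        assumed mixing weight between the two branches
    """
    # Blend the two feature sources (simple convex combination here;
    # the actual DEFM is presumably learned, not a fixed blend).
    fused = alpha * local_feat + (1.0 - alpha) * global_feat
    # Amplify responses in dense-target regions via the density mask.
    return fused * (1.0 + density_mask[..., None])

# Toy example: a 4x4 feature map with one dense region.
local = np.ones((4, 4, 8))
glob = np.zeros((4, 4, 8))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0  # density peak in the center
out = density_guided_fusion(local, glob, mask)
```

With `alpha=0.5` the fused background value is 0.5, and cells under the density peak are doubled to 1.0, illustrating how the mask steers attention toward dense regions.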