DenSe-AdViT: A novel Vision Transformer for Dense SAR Object Detection

📅 2025-04-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient multi-scale local feature extraction capability of Vision Transformers (ViTs) for densely packed small targets in Synthetic Aperture Radar (SAR) imagery, this paper proposes the Density-Sensitive Vision Transformer with Adaptive Tokens (DenSe-AdViT). The method introduces two key innovations: (1) a Density-Aware Module (DAM) that explicitly models the spatial distribution density of targets and generates a density tensor; and (2) a Density-Enhanced Fusion Module (DEFM) that enables density-guided collaborative optimization between CNN-derived local features and Transformer-based global features. Critically, DenSe-AdViT leverages density priors without requiring additional annotations, thereby enhancing discriminability for small targets. Experimental results demonstrate state-of-the-art performance, achieving 79.8% and 92.5% mAP on the RSDD and SIVED SAR datasets, respectively, substantially outperforming existing SAR object detection approaches.
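The DAM's output can be pictured as a coarse density map over the image: regions with many target centers get high values, sparse regions get low ones. The sketch below is a toy illustration of that idea only, using a simple per-cell histogram; the paper's DAM is a learned module guided by a crafted objective metric, not a fixed histogram, and the function and grid size here are hypothetical.

```python
import numpy as np

def density_tensor(centers, img_size=256, grid=8):
    """Toy density map: count target centers per grid cell, then
    normalize to [0, 1]. Illustrative only -- the paper's DAM is
    learned, not a simple histogram."""
    cell = img_size / grid
    dens = np.zeros((grid, grid), dtype=np.float32)
    for x, y in centers:
        i = min(int(y // cell), grid - 1)
        j = min(int(x // cell), grid - 1)
        dens[i, j] += 1.0
    if dens.max() > 0:
        dens /= dens.max()
    return dens

# Three densely packed targets in the top-left corner, one isolated target.
centers = [(10, 10), (20, 15), (15, 25), (200, 200)]
d = density_tensor(centers)
print(d.shape)   # (8, 8)
print(d[0, 0])   # 1.0 -- the densely packed cell has maximal density
```

A map like this makes the "density prior" concrete: downstream modules can weight attention by these per-cell values instead of treating all image regions uniformly.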

📝 Abstract
Vision Transformer (ViT) has achieved remarkable results in object detection for synthetic aperture radar (SAR) images, owing to its exceptional ability to extract global features. However, it struggles with the extraction of multi-scale local features, leading to limited performance in detecting small targets, especially when they are densely arranged. Therefore, we propose the Density-Sensitive Vision Transformer with Adaptive Tokens (DenSe-AdViT) for dense SAR target detection. We design a Density-Aware Module (DAM) as a preliminary component that generates a density tensor based on target distribution. It is guided by a meticulously crafted objective metric, enabling precise and effective capture of the spatial distribution and density of objects. To integrate the multi-scale information enhanced by convolutional neural networks (CNNs) with the global features derived from the Transformer, a Density-Enhanced Fusion Module (DEFM) is proposed. It effectively refines attention toward regions where targets are present, with the assistance of the density mask and multi-source features. Notably, our DenSe-AdViT achieves 79.8% mAP on the RSDD dataset and 92.5% on the SIVED dataset, both of which feature a large number of densely distributed vehicle targets.
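The fusion idea described in the abstract, blending CNN local features with Transformer global features under density guidance, can be sketched as a density-gated mixture. This is a minimal sketch under the assumption that the mask acts as a per-location blending weight; the actual DEFM is a learned fusion module, and all names and shapes here are illustrative.

```python
import numpy as np

def density_guided_fusion(local_feat, global_feat, density_mask):
    """Blend CNN local features with Transformer global features,
    letting the density mask steer emphasis toward dense regions.
    Minimal sketch of the DEFM idea; the real module is learned."""
    m = density_mask[..., None]              # broadcast over channels
    return m * local_feat + (1.0 - m) * global_feat

rng = np.random.default_rng(0)
local = rng.normal(size=(8, 8, 4)).astype(np.float32)   # CNN local features
glob = rng.normal(size=(8, 8, 4)).astype(np.float32)    # ViT global features
mask = np.zeros((8, 8), dtype=np.float32)
mask[0, 0] = 1.0                             # one densely packed cell

fused = density_guided_fusion(local, glob, mask)
# Dense cell keeps fine local detail; sparse cells keep global context.
```

The gating makes the trade-off explicit: where targets are densely packed, fine-grained local features dominate; elsewhere, the global context from the Transformer is retained.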
Problem

Research questions and friction points this paper is trying to address.

Improves small target detection in dense SAR images
Enhances multi-scale local feature extraction using ViT
Integrates CNN multi-scale info with Transformer global features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Density-Aware Module for target distribution analysis
Density-Enhanced Fusion Module for multi-scale integration
Vision Transformer with adaptive tokens for SAR detection
Yang Zhang
School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
Jingyi Cao
Beijing University of Posts and Telecommunications
Yanan You
School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
Yuanyuan Qiao
School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China