SM3Det: A Unified Model for Multi-Modal Remote Sensing Object Detection

📅 2024-12-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing remote sensing object detection models are constrained by unimodal, single-task paradigms, limiting their generalization across heterogeneous modalities (e.g., optical, SAR, infrared) and diverse bounding box representations (horizontal and rotated). To address this, we formulate a new task, Multi-Modal Datasets and Multi-Task Object Detection (M2Det), and propose SM3Det, a unified model for it. SM3Det introduces a grid-level sparse Mixture-of-Experts (MoE) backbone that enables joint knowledge learning across modalities while preserving modality-specific feature representations. It further incorporates a dynamic learning-rate-driven consistency and synchronization optimization strategy that balances the varying learning difficulties across modalities and tasks. Extensive experiments on multiple benchmark remote sensing datasets demonstrate that SM3Det consistently outperforms specialized, modality-specific detectors, achieving significant improvements in cross-sensor generalization and robustness to varying annotation formats (e.g., horizontal vs. rotated boxes). The source code is publicly available.

πŸ“ Abstract
With the rapid advancement of remote sensing technology, high-resolution multi-modal imagery is now more widely accessible. Conventional object detection models are trained on a single dataset, often restricted to a specific imaging modality and annotation format. However, such an approach overlooks the valuable shared knowledge across multi-modalities and limits the model's applicability in more versatile scenarios. This paper introduces a new task called Multi-Modal Datasets and Multi-Task Object Detection (M2Det) for remote sensing, designed to accurately detect horizontal or oriented objects from any sensor modality. This task poses challenges due to 1) the trade-offs involved in managing multi-modal modelling and 2) the complexities of multi-task optimization. To address these, we establish a benchmark dataset and propose a unified model, SM3Det (Single Model for Multi-Modal datasets and Multi-Task object Detection). SM3Det leverages a grid-level sparse MoE backbone to enable joint knowledge learning while preserving distinct feature representations for different modalities. Furthermore, it integrates a consistency and synchronization optimization strategy using dynamic learning rate adjustment, allowing it to effectively handle varying levels of learning difficulty across modalities and tasks. Extensive experiments demonstrate SM3Det's effectiveness and generalizability, consistently outperforming specialized models on individual datasets. The code is available at https://github.com/zcablii/SM3Det.
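The grid-level sparse MoE idea from the abstract can be illustrated with a minimal NumPy sketch: each spatial grid cell of a feature map is routed to its top-k experts, so different modalities can activate different experts while sharing the same backbone. The function name, tensor shapes, and routing details below are illustrative assumptions, not the paper's actual layer design.

```python
import numpy as np

def grid_sparse_moe(features, expert_weights, gate_weights, top_k=1):
    """Toy grid-level sparse MoE (hypothetical shapes, not SM3Det's code).

    features:       (H, W, C) feature map
    expert_weights: (E, C, C) one linear expert per index
    gate_weights:   (C, E)    router producing per-cell expert scores
    Each grid cell is processed only by its top-k experts, weighted by
    a softmax over the selected experts' scores.
    """
    H, W, C = features.shape
    out = np.zeros_like(features)
    logits = features @ gate_weights          # (H, W, E) routing scores
    for i in range(H):
        for j in range(W):
            scores = logits[i, j]
            top = np.argsort(scores)[-top_k:]     # top-k expert indices
            probs = np.exp(scores[top] - scores[top].max())
            probs /= probs.sum()                  # softmax over selected experts
            for p, e in zip(probs, top):
                out[i, j] += p * (expert_weights[e] @ features[i, j])
    return out
```

The sparsity is what makes this cheap at scale: with top_k much smaller than the number of experts, only a fraction of expert parameters is active per cell, which is how the model can keep modality-specific capacity without paying for it on every input.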
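The abstract's second ingredient, dynamic learning rate adjustment for consistency and synchronization across tasks, can also be sketched in a simplified form: tasks whose loss is descending more slowly (harder tasks) receive a proportionally larger learning rate, keeping optimization progress roughly synchronized. The heuristic below is a hypothetical illustration of this general idea, not the paper's actual strategy.

```python
def adjust_task_lrs(base_lr, loss_histories, window=2):
    """Hypothetical per-task LR scaling (not SM3Det's exact rule).

    loss_histories: dict mapping task name -> list of recent loss values.
    A task's descent rate is the ratio of its current loss to its loss
    `window` steps ago; slower-descending tasks (ratio near 1) get a
    larger learning rate relative to the mean.
    """
    rates = {}
    for task, hist in loss_histories.items():
        if len(hist) < window + 1:
            rates[task] = 1.0          # not enough history: neutral scaling
            continue
        # ratio close to 1.0 => loss barely moved => harder task
        rates[task] = max(hist[-1] / (hist[-1 - window] + 1e-8), 1e-3)
    mean_rate = sum(rates.values()) / len(rates)
    return {t: base_lr * (r / mean_rate) for t, r in rates.items()}
```

With two tasks where one loss halves every step and the other barely moves, the slow task ends up with an above-base learning rate and the fast task with a below-base one, which is the synchronizing behavior the abstract describes at a high level.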
Problem

Research questions and friction points this paper is trying to address.

Object Recognition
Multiview Datasets
Remote Sensing Images
Innovation

Methods, ideas, or system contributions that make the work stand out.

SM3Det
Multi-task Object Detection
Adaptive Learning Rate Optimization
Yuxuan Li
VCIP Lab, Computer Science, NKU
Xiang Li
VCIP Lab, Computer Science, NKU, NKIARI, Futian, Shenzhen
Yunheng Li
Nankai University
Computer Vision
Yicheng Zhang
VCIP Lab, Computer Science, NKU
Yimian Dai
VCIP Lab, Computer Science, NKU
Qibin Hou
Nankai University
Deep Learning, Computer Vision, Visual Attention
Ming-Ming Cheng
Professor of Computer Science, Nankai University
Computer Vision, Computer Graphics, Visual Attention, Saliency
Jian Yang
VCIP Lab, Computer Science, NKU, NKIARI, Futian, Shenzhen