Multispectral State-Space Feature Fusion: Bridging Shared and Cross-Parametric Interactions for Object Detection

📅 2025-07-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address two key bottlenecks in multispectral object detection—(i) excessive reliance on local complementary features and insufficient cross-modal semantic sharing, and (ii) the trade-off between receptive field size and computational complexity—this paper proposes MS2Fusion, a novel fusion framework built upon state space models (SSMs). MS2Fusion introduces a first-of-its-kind dual-path parameter interaction mechanism that jointly models cross-modal shared semantics and modality-specific complementary features. Furthermore, it incorporates cross-modal hidden-state decoding and parameter sharing under joint embedding to achieve efficient, scalable feature fusion. Extensive experiments demonstrate that MS2Fusion achieves significant performance gains over state-of-the-art methods on standard benchmarks including FLIR, M3FD, and LLVIP. Moreover, the framework exhibits strong generalization capability, delivering competitive results on downstream RGB-T semantic segmentation and salient object detection tasks.

📝 Abstract
Modern multispectral feature fusion for object detection faces two critical limitations: (1) an excessive preference for local complementary features over cross-modal shared semantics adversely affects generalization performance; and (2) the trade-off between receptive field size and computational complexity presents a critical bottleneck for scalable feature modeling. Addressing these issues, a novel Multispectral State-Space Feature Fusion framework, dubbed MS2Fusion, is proposed based on the state space model (SSM), achieving efficient and effective fusion through a dual-path parametric interaction mechanism. More specifically, the first, cross-parameter interaction branch inherits the advantage of cross-attention in mining complementary information through cross-modal hidden state decoding in SSM. The second, shared-parameter branch explores cross-modal alignment with joint embedding, obtaining similar cross-modal semantic features and structures through parameter sharing in SSM. Finally, these two paths are jointly optimized with SSM for fusing multispectral features in a unified framework, allowing our MS2Fusion to enjoy both functional complementarity and a shared semantic space. In extensive experiments on mainstream benchmarks including FLIR, M3FD and LLVIP, our MS2Fusion significantly outperforms other state-of-the-art multispectral object detection methods, evidencing its superiority. Moreover, MS2Fusion is general and applicable to other multispectral perception tasks. We show that, even without task-specific design, MS2Fusion achieves state-of-the-art results on RGB-T semantic segmentation and RGB-T salient object detection, demonstrating its generality. The source code will be available at https://github.com/61s61min/MS2Fusion.git.
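As a rough illustration of the dual-path idea described above—not the paper's actual implementation—the two branches can be sketched with a toy linear SSM in NumPy. Here, the cross-parameter branch decodes each modality's hidden states with the other modality's output matrix, while the shared-parameter branch runs both modalities through one set of shared SSM parameters; all function names and the averaging-based fusion at the end are assumptions for illustration only:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal discrete linear SSM: h_t = A h_{t-1} + B x_t, y_t = C h_t.
    x: (T, d_in); A: (d_state, d_state); B: (d_state, d_in); C: (d_in, d_state)."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B @ x_t
        ys.append(C @ h)
    return np.stack(ys)

def dual_path_fusion(x_rgb, x_ir, params_rgb, params_ir, params_shared):
    """Toy sketch of dual-path parametric interaction (hypothetical, simplified)."""
    A_r, B_r, C_r = params_rgb
    A_i, B_i, C_i = params_ir
    A_s, B_s, C_s = params_shared
    # Cross-parameter branch: each modality's hidden states are decoded with
    # the OTHER modality's output matrix (complementary information mining).
    y_cross_rgb = ssm_scan(x_rgb, A_r, B_r, C_i)
    y_cross_ir = ssm_scan(x_ir, A_i, B_i, C_r)
    # Shared-parameter branch: both modalities pass through one shared SSM
    # (joint embedding toward a common semantic space).
    y_shared_rgb = ssm_scan(x_rgb, A_s, B_s, C_s)
    y_shared_ir = ssm_scan(x_ir, A_s, B_s, C_s)
    # Placeholder fusion by averaging; the paper learns this jointly instead.
    return (y_cross_rgb + y_cross_ir + y_shared_rgb + y_shared_ir) / 4.0
```

In the real method these branches operate on 2D feature maps with selective (input-dependent) SSM parameters and are optimized end-to-end; the sketch only conveys how swapping versus sharing parameters yields the two interaction paths.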
Problem

Research questions and friction points this paper is trying to address.

Improving cross-modal shared semantics in object detection
Balancing receptive field size and computational complexity
Enhancing multispectral feature fusion for better generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-path parametric interaction mechanism
Cross-modal hidden state decoding
Joint embedding for semantic alignment
Jifeng Shen
Jiangsu University
Computer Vision
Haibo Zhan
School of Electrical and Information Engineering, Jiangsu University, Zhenjiang, 212013, China
Shaohua Dong
University of North Texas
Computer Vision
Xin Zuo
School of Computer Science and Engineering, Jiangsu University of Science and Technology, Zhenjiang, 212003, China
Wankou Yang
School of Automation, Southeast University, Nanjing, 210096, China
Haibin Ling
Chair Professor, Westlake University
Computer vision, augmented reality, medical image analysis, machine learning, AI for science