DTU-Net: A Multi-Scale Dilated Transformer Network for Nonlinear Hyperspectral Unmixing

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Transformer-based unmixing networks struggle to model multi-scale long-range spatial dependencies and nonlinear mixing effects. To address this, we propose DTU-Net—a multi-scale dilated Transformer network for nonlinear hyperspectral unmixing. Its key contributions are: (1) a novel Multi-Scale Dilated Attention (MSDA) mechanism that overcomes local receptive field limitations and jointly captures long-range spatial-spectral correlations; (2) a physically interpretable decoder grounded in the Polynomial Post-Nonlinear Mixing Model (PPNMM), explicitly encoding polynomial nonlinear mixing relationships; and (3) an integrated architecture combining 3D-CNNs, channel-wise attention, and spatial-spectral feature interaction modules for synergistic spatial-spectral optimization. Evaluated on both synthetic and real-world datasets, DTU-Net consistently outperforms PPNMM-based methods and state-of-the-art deep unmixing networks, achieving significant improvements in abundance estimation accuracy while maintaining strong generalizability and physical interpretability.

📝 Abstract
Transformers have shown significant success in hyperspectral unmixing (HU). However, challenges remain. While multi-scale and long-range spatial correlations are essential in unmixing tasks, current Transformer-based unmixing networks, built on Vision Transformer (ViT) or Swin-Transformer, struggle to capture them effectively. Additionally, current Transformer-based unmixing networks rely on the linear mixing model, which lacks the flexibility to accommodate scenarios where nonlinear effects are significant. To address these limitations, we propose a multi-scale Dilated Transformer-based unmixing network for nonlinear HU (DTU-Net). The encoder employs two branches. The first one performs multi-scale spatial feature extraction using Multi-Scale Dilated Attention (MSDA) in the Dilated Transformer, which varies dilation rates across attention heads to capture long-range and multi-scale spatial correlations. The second one performs spectral feature extraction utilizing 3D-CNNs with channel attention. The outputs from both branches are then fused to integrate multi-scale spatial and spectral information, which is subsequently transformed to estimate the abundances. The decoder is designed to accommodate both linear and nonlinear mixing scenarios. Its interpretability is enhanced by explicitly modeling the relationships between endmembers, abundances, and nonlinear coefficients in accordance with the polynomial post-nonlinear mixing model (PPNMM). Experiments on synthetic and real datasets validate the effectiveness of the proposed DTU-Net compared to PPNMM-derived methods and several advanced unmixing networks.
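The decoder described above follows the polynomial post-nonlinear mixing model (PPNMM), in which the linear mixture is augmented by its element-wise square scaled by a nonlinearity coefficient. A minimal sketch of that forward model is below; the function name, the toy endmember matrix, and the coefficient values are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def ppnmm_forward(M, a, b):
    """Reconstruct a pixel spectrum under the PPNMM.

    M : (bands, endmembers) endmember signature matrix
    a : (endmembers,) abundances (nonnegative, sum-to-one)
    b : scalar nonlinearity coefficient (b = 0 recovers the LMM)
    """
    linear = M @ a                       # linear mixing term
    return linear + b * linear * linear  # polynomial post-nonlinear term

# toy example: two endmembers, four spectral bands
M = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.4, 0.6],
              [0.2, 0.8]])
a = np.array([0.6, 0.4])
y_lin = ppnmm_forward(M, a, b=0.0)  # reduces to the linear mixing model
y_non = ppnmm_forward(M, a, b=0.3)  # adds nonlinear scattering effects
```

Setting `b = 0` shows why the decoder accommodates both regimes: the linear mixing model is a special case of the PPNMM.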
Problem

Research questions and friction points this paper is trying to address.

Capturing multi-scale and long-range spatial correlations, which current Transformer-based unmixing networks struggle to model.
Overcoming the limitations of the linear mixing model in scenarios where nonlinear effects are significant.
Integrating spatial and spectral features effectively within a single unmixing network (DTU-Net).
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Scale Dilated Attention captures spatial correlations
3D-CNNs with channel attention extract spectral features
Decoder models linear and nonlinear mixing scenarios
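The key idea behind Multi-Scale Dilated Attention is that different attention heads sample their neighborhoods at different dilation rates, so each head sees a different spatial scale while keeping long-range reach. The following toy sketch illustrates this on a 1-D token sequence; the function name, 1-D simplification, and hyperparameters are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def msda_1d(x, num_heads=2, window=3, dilations=(1, 2)):
    """Toy sketch of multi-scale dilated attention on a 1-D sequence.

    Each head attends to `window` neighbors sampled at its own dilation
    rate, so heads with larger dilations capture longer-range context.
    Illustrative only, not the paper's implementation.
    """
    n, d = x.shape
    dh = d // num_heads  # per-head feature dimension
    out = np.zeros_like(x)
    for h, r in zip(range(num_heads), dilations):
        q = x[:, h * dh:(h + 1) * dh]
        for i in range(n):
            # neighbor indices at dilation r, clipped to the sequence
            offsets = r * np.arange(-(window // 2), window // 2 + 1)
            idx = np.clip(i + offsets, 0, n - 1)
            k = x[idx, h * dh:(h + 1) * dh]
            att = np.exp(q[i] @ k.T / np.sqrt(dh))  # scaled dot-product
            att /= att.sum()                        # softmax over neighbors
            out[i, h * dh:(h + 1) * dh] = att @ k
    return out

x = np.random.default_rng(0).normal(size=(8, 4))
y = msda_1d(x)
```

Varying `dilations` across heads is what gives the multi-scale behavior: with the same window size, a head with dilation 2 spans twice the spatial extent of a head with dilation 1.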
Chentong Wang
Center for Applied Mathematics, Tianjin University, Tianjin, 300072, China
Jincheng Gao
Center for Applied Mathematics, Tianjin University, Tianjin, 300072, China
Fei Zhu
Center for Applied Mathematics, Tianjin University, Tianjin, 300072, China
Abderrahim Halimi
Associate Professor, Heriot-Watt University, Edinburgh
Image processing, Signal processing, Computational imaging, Machine learning
Cédric Richard
Université Côte d'Azur, CNRS, OCA, F-06108, Nice, France