Dynamic Memory-enhanced Transformer for Hyperspectral Image Classification

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Hyperspectral image (HSI) classification faces challenges in modeling spatial-spectral correlations and suffers from attention redundancy and low computational efficiency in Transformer-based approaches. To address these issues, we propose MemFormer, a lightweight memory-augmented Transformer. First, we introduce a novel dynamic memory-augmented multi-head attention mechanism, where learnable memory units iteratively refine attention weights to enhance fine-grained relational modeling. Second, we design a spatial-spectral positional encoding (SSPE) tailored to HSI characteristics, preserving structural continuity while ensuring computational efficiency. Third, we incorporate a progressive memory enrichment strategy to strengthen feature representation. Extensive experiments on multiple benchmark HSI datasets demonstrate that MemFormer achieves state-of-the-art classification accuracy with significantly fewer parameters and lower computational overhead compared to existing CNN- and Transformer-based methods.

📝 Abstract
Hyperspectral image (HSI) classification remains a challenging task due to the intricate spatial-spectral correlations. Existing transformer models excel in capturing long-range dependencies but often suffer from information redundancy and attention inefficiencies, limiting their ability to model fine-grained relationships crucial for HSI classification. To overcome these limitations, this work proposes MemFormer, a lightweight and memory-enhanced transformer. MemFormer introduces a memory-enhanced multi-head attention mechanism that iteratively refines a dynamic memory module, enhancing feature extraction while reducing redundancy across layers. Additionally, a dynamic memory enrichment strategy progressively captures complex spatial and spectral dependencies, leading to more expressive feature representations. To further improve structural consistency, we incorporate a spatial-spectral positional encoding (SSPE) tailored for HSI data, ensuring continuity without the computational burden of convolution-based approaches. Extensive experiments on benchmark datasets demonstrate that MemFormer achieves superior classification accuracy, outperforming state-of-the-art methods.
Problem

Research questions and friction points this paper is trying to address.

Addressing information redundancy in HSI transformer models
Enhancing spatial-spectral feature extraction efficiency
Improving classification accuracy with dynamic memory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memory-enhanced multi-head attention mechanism
Dynamic memory enrichment strategy
Spatial-spectral positional encoding (SSPE)
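
The memory-enhanced attention described above can be illustrated with a minimal sketch: a set of learnable memory units is concatenated to the keys and values of standard attention, so that every input token can also attend to the shared memory. This is a hedged, hypothetical reading of the mechanism (the paper's exact formulation, including the iterative refinement and enrichment steps, is not given here); all names and shapes below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_augmented_attention(x, Wq, Wk, Wv, memory):
    """Single-head attention where learnable memory units are appended
    to the keys/values (illustrative sketch, not the paper's exact
    MemFormer formulation)."""
    q = x @ Wq                                           # (n, d) queries
    k = np.concatenate([x @ Wk, memory @ Wk], axis=0)    # (n+m, d) keys
    v = np.concatenate([x @ Wv, memory @ Wv], axis=0)    # (n+m, d) values
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))       # tokens attend to inputs AND memory
    return attn @ v                                      # (n, d) refined features

rng = np.random.default_rng(0)
n, m, d = 6, 4, 8                      # n spatial-spectral tokens, m memory slots, d dim
x = rng.normal(size=(n, d))            # token embeddings of one HSI patch (toy data)
memory = rng.normal(size=(m, d))       # learnable memory units (random here)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = memory_augmented_attention(x, Wq, Wk, Wv, memory)
print(out.shape)  # (6, 8): one refined feature vector per input token
```

In a full model the memory would be a trained parameter, updated layer by layer, which is presumably how the progressive enrichment strategy accumulates spatial-spectral context across depth.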