Dual Classification Head Self-training Network for Cross-scene Hyperspectral Image Classification

📅 2025-02-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address significant spectral shifts, large intra-class feature distribution discrepancies, and severe label scarcity in cross-scene hyperspectral image classification, this paper proposes the Dual-Head Self-Training Network (DHSNet). DHSNet introduces, for the first time in this task, a dual-branch self-training framework that jointly optimizes a class-level feature alignment loss and a center-aware feature attention module, effectively mitigating error accumulation in pseudo-labels and enhancing scene-invariant feature representation. Additionally, an adversarial domain alignment strategy is incorporated to further reduce inter-domain distribution divergence. Evaluated on three standard cross-scene benchmarks, DHSNet achieves an average overall accuracy improvement of 3.2–5.8 percentage points over state-of-the-art methods. The source code is publicly available.

Technology Category

Application Category

📝 Abstract
Due to the difficulty of obtaining labeled data for hyperspectral images (HSIs), cross-scene classification has emerged as a widely adopted approach in the remote sensing community. It involves training a model using labeled data from a source domain (SD) and unlabeled data from a target domain (TD), followed by inferencing on the TD. However, variations in the reflectance spectrum of the same object between the SD and the TD, as well as differences in the feature distribution of the same land cover class, pose significant challenges to the performance of cross-scene classification. To address this issue, we propose a dual classification head self-training network (DHSNet). This method aligns class-wise features across domains, ensuring that the trained classifier can accurately classify TD data of different classes. We introduce a dual classification head self-training strategy for the first time in the cross-scene HSI classification field. The proposed approach mitigates domain gap while preventing the accumulation of incorrect pseudo-labels in the model. Additionally, we incorporate a novel central feature attention mechanism to enhance the model's capacity to learn scene-invariant features across domains. Experimental results on three cross-scene HSI datasets demonstrate that the proposed DHSNET significantly outperforms other state-of-the-art approaches. The code for DHSNet will be available at https://github.com/liurongwhm.
Problem

Research questions and friction points this paper is trying to address.

Cross-scene hyperspectral image classification
Domain gap in feature distribution
Accumulation of incorrect pseudo-labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual classification head self-training
Central feature attention mechanism
Cross-scene HSI classification enhancement
🔎 Similar Papers
No similar papers found.
Rong Liu
Rong Liu
University of Southern California
Computer VisionMachine LearningComputer Graphics
J
Junye Liang
School of Geography and Planning, Sun Yat-Sen University, Guangzhou 510275, China
J
Jiaqi Yang
Department of Forest and Wildlife Ecology, University of Wisconsin-Madison, 1630 Linden Dr., Madison, WI 53706, USA
J
Jiang He
Chair of Data Science in Earth Observation, Technical University of Munich, Munich, 80333, Germany
Peng Zhu
Peng Zhu
Anhui Medical University
prenatal stressnutritionvitamin Dbirth outcomesneurobehavioral development