Towards Adversarial Training under Hyperspectral Images

πŸ“… 2025-10-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Hyperspectral image classification models are highly vulnerable to adversarial attacks because of their sensitivity to spectral semantics, and existing architecture-based defenses scale poorly and fail against strong attacks. To address this, the paper introduces adversarial training to the hyperspectral domain for the first time and proposes AT-RA, a training paradigm that preserves and corrects spectral semantics through data augmentation that increases spectral diversity while enforcing spatial smoothness. Extensive experiments show that AT-RA improves robustness across multiple architectures and datasets, raising classification accuracy by 21.34% under AutoAttack and by 18.78% under PGD-50 while also increasing clean (benign) accuracy by 2.68%.

πŸ“ Abstract
Recent studies have revealed that hyperspectral classification models based on deep learning are highly vulnerable to adversarial attacks, which pose significant security risks. Although several approaches have attempted to enhance adversarial robustness by modifying network architectures, these methods often rely on customized designs that limit scalability and fail to defend effectively against strong attacks. To address these challenges, we introduce adversarial training to the hyperspectral domain, which is widely regarded as one of the most effective defenses against adversarial attacks. Through extensive empirical analyses, we demonstrate that while adversarial training does enhance robustness across various models and datasets, hyperspectral data introduces unique challenges not seen in RGB images. Specifically, we find that adversarial noise and the non-smooth nature of adversarial examples can distort or eliminate important spectral semantic information. To mitigate this issue, we employ data augmentation techniques and propose a novel hyperspectral adversarial training method, termed AT-RA. By increasing the diversity of spectral information and ensuring spatial smoothness, AT-RA preserves and corrects spectral semantics in hyperspectral images. Experimental results show that AT-RA improves adversarial robustness by 21.34% against AutoAttack and 18.78% against PGD-50 while boosting benign accuracy by 2.68%.
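The adversarial training the abstract refers to means training on worst-case perturbed inputs, typically generated with projected gradient descent (PGD). Below is a minimal sketch of an L-infinity PGD attack on a toy logistic-regression model; the model, step sizes, and budget are illustrative stand-ins, not the paper's actual setup, which attacks deep hyperspectral classifiers.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD on a toy logistic-regression classifier.

    Repeatedly steps in the sign of the input gradient of the loss
    (gradient *ascent*), then projects back into the eps-ball around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid probability
        grad_x = (p - y) * w                  # dL/dx for BCE-with-logits
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
    return x_adv
```

In adversarial training, each minibatch is replaced (or mixed) with such `x_adv` before the usual weight update, so the model learns to classify the perturbed inputs correctly.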
Problem

Research questions and friction points this paper is trying to address.

Enhancing adversarial robustness of hyperspectral classification models
Addressing spectral semantic distortion from adversarial attacks
Improving defense scalability against strong adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing adversarial training to hyperspectral domain
Proposing AT-RA method with spectral augmentation
Ensuring spatial smoothness to preserve spectral semantics
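The augmentation idea above, increasing spectral diversity while keeping the spatial signal smooth, can be sketched on an H x W x C hyperspectral cube as follows. The per-band gain jitter and the 3x3 mean filter here are illustrative stand-ins; the paper's exact transforms are not specified in this summary.

```python
import numpy as np

def augment_hsi(cube, rng, scale_range=(0.9, 1.1)):
    """Illustrative augmentation for an H x W x C hyperspectral cube:
    per-band random gain (spectral diversity) followed by a 3x3 spatial
    mean filter per band (spatial smoothness), with edge padding."""
    h, w, c = cube.shape
    # spectral diversity: jitter each band's gain independently
    gains = rng.uniform(scale_range[0], scale_range[1], size=c)
    out = cube * gains
    # spatial smoothness: 3x3 mean filter applied band-wise
    padded = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
    smoothed = np.zeros_like(out)
    for dy in range(3):
        for dx in range(3):
            smoothed += padded[dy:dy + h, dx:dx + w, :]
    return smoothed / 9.0
```

The smoothing step reflects the paper's observation that non-smooth adversarial examples distort spectral semantics: averaging over a small spatial neighborhood suppresses high-frequency perturbations while leaving the per-pixel spectrum's shape largely intact.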
πŸ”Ž Similar Papers
No similar papers found.