🤖 AI Summary
To address the scarcity of labeled samples and poor cross-sensor generalization in hyperspectral image (HSI) classification, this paper proposes a lightweight, non-intrusive cross-modal domain adaptation framework. Methodologically, it introduces: (1) Attention-Gated Tuning (AGT), enabling selective knowledge transfer from source domains (e.g., RGB or heterogeneous HSI); (2) a spectral-spatial decoupled Tri-Former architecture, facilitating efficient feature alignment and interference suppression; and (3) a triplet-style heterogeneous fine-tuning scheme. Evaluated on three HSI datasets acquired by distinct sensors, the method consistently outperforms state-of-the-art approaches across homogeneous, heterogeneous, and cross-modal (RGB→HSI) settings, demonstrating strong robustness and generalizability. The implementation is publicly available.
📝 Abstract
Data-hungry HSI classification methods require high-quality labeled HSIs, which are often costly to obtain. This characteristic limits the performance potential of data-driven methods when dealing with limited annotated samples. Bridging the domain gap between data acquired from different sensors allows us to utilize abundant labeled data across sensors to break this bottleneck. In this paper, we propose a novel Attention-Gated Tuning (AGT) strategy and a triplet-structured transformer model, Tri-Former, to address this issue. The AGT strategy serves as a bridge, allowing us to leverage existing labeled HSI datasets, and even RGB datasets, to enhance performance on new HSI datasets with limited samples. Instead of inserting additional parameters inside the basic model, we train a lightweight auxiliary branch that takes intermediate features from the basic model as input and makes predictions. The proposed AGT resolves conflicts between heterogeneous and even cross-modal data by suppressing disturbing information and enhancing useful information through a soft gate. Additionally, we introduce Tri-Former, a triplet-structured transformer with a spectral-spatial separation design that improves parameter utilization and computational efficiency, enabling easier and more flexible fine-tuning. Comparison experiments conducted on three representative HSI datasets captured by different sensors demonstrate that the proposed Tri-Former achieves better performance than several state-of-the-art methods. Homogeneous, heterogeneous, and cross-modal tuning experiments verify the effectiveness of the proposed AGT. Code has been released at: https://github.com/Cecilia-xue/AGT.
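The soft-gate idea described above — a lightweight branch that reweights intermediate features so disturbing channels are suppressed and useful ones pass through — can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification for intuition only, not the paper's implementation; the function name, shapes, and fixed gate logits are all assumptions (in the actual AGT, the gate parameters are learned during fine-tuning and the features come from a frozen Tri-Former backbone).

```python
import numpy as np

def sigmoid(x):
    # Numerically plain logistic function; maps logits to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def attention_gated_fusion(features, gate_logits):
    """Hypothetical soft gate: one learned logit per feature channel.

    A gate value near 1 passes the channel to the auxiliary branch;
    a value near 0 suppresses it as cross-modal interference.
    """
    gate = sigmoid(gate_logits)
    return gate * features  # broadcast channel-wise reweighting

# Toy intermediate features from a frozen backbone (shape: 1 sample x 3 channels)
features = np.array([[1.0, -2.0, 0.5]])
# Fixed logits for illustration: keep ch0, suppress ch1, stay neutral on ch2
gate_logits = np.array([4.0, -4.0, 0.0])

gated = attention_gated_fusion(features, gate_logits)
# Channel 1 is strongly attenuated; channel 2 is halved (sigmoid(0) = 0.5)
```

In the full method, the gated features feed a small auxiliary prediction head, so the pretrained source-domain backbone is never modified — only the gate and the auxiliary branch are trained on the target HSI dataset.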