🤖 AI Summary
This work addresses the scarcity of self-supervised pretraining methods for hyperspectral regression by proposing a spectral-spatial joint contrastive learning framework. The approach features a model-agnostic design that is compatible with diverse backbone architectures, including 3D convolutional networks and Transformers, and introduces data augmentation strategies tailored specifically to hyperspectral data. Presented as the first study to effectively extend contrastive learning to hyperspectral regression, the framework consistently enhances the regression performance of multiple backbone models on both synthetic and real-world datasets, demonstrating its effectiveness and strong generalization capability.
📝 Abstract
Contrastive learning has demonstrated great success in representation learning, especially for image classification tasks. However, studies targeting regression tasks, and specifically applications to hyperspectral data, remain scarce. In this paper, we propose a spectral-spatial contrastive learning framework for hyperspectral regression tasks, with a model-agnostic design that can enhance backbones such as 3D convolutional and transformer-based networks. Moreover, we provide a collection of transformations relevant for augmenting hyperspectral data. Experiments on synthetic and real datasets show that the proposed framework and transformations significantly improve the performance of all studied backbone models.
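The abstract does not include code, so the sketch below is only an illustration of the general recipe it describes: generate two augmented views of a hyperspectral patch and train with a contrastive objective. The augmentations shown (spectral band dropout and spatial flipping) and all function names and parameters are hypothetical choices for illustration, not the authors' actual transformations, and the loss is the standard InfoNCE / NT-Xent formulation rather than the paper's specific objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_band_dropout(patch, drop_frac=0.1, rng=rng):
    """Zero out a random fraction of spectral bands.

    `patch` is assumed to be an (H, W, B) hyperspectral cube; this is an
    illustrative spectral augmentation, not the paper's exact transform.
    """
    h, w, b = patch.shape
    n_drop = max(1, int(drop_frac * b))
    bands = rng.choice(b, size=n_drop, replace=False)
    out = patch.copy()
    out[:, :, bands] = 0.0
    return out

def spatial_flip(patch, rng=rng):
    """Randomly flip the patch along its two spatial axes."""
    out = patch
    if rng.random() < 0.5:
        out = out[::-1, :, :]
    if rng.random() < 0.5:
        out = out[:, ::-1, :]
    return out.copy()

def info_nce_loss(z1, z2, temperature=0.5):
    """Standard InfoNCE (NT-Xent) loss between two batches of embeddings.

    z1[i] and z2[i] are embeddings of two views of the same patch; all
    other pairs in the batch act as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)        # (2n, d)
    sim = z @ z.T / temperature                 # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    # the positive partner of sample i is i+n (and vice versa)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return -(sim[np.arange(2 * n), targets] - logsumexp).mean()
```

A typical pretraining step would embed `spectral_band_dropout(spatial_flip(patch))` twice through the backbone (3D CNN or Transformer) and minimize `info_nce_loss` on the resulting embeddings; the backbone is then fine-tuned on the downstream regression target.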