🤖 AI Summary
To address the critical challenge of label scarcity in extracting vegetation characteristics from airborne hyperspectral imagery, this paper proposes a vegetation-aware self-supervised representation learning method. We design a contrastive learning framework incorporating spectral consistency constraints and spatial context augmentation to construct a semantically interpretable crown embedding space, wherein learned features implicitly encode key biophysical attributes—such as chlorophyll content and water status—without explicit supervision. Compared with conventional handcrafted hyperspectral indices and supervised baselines, our embeddings achieve an average 12.7% improvement in F1-score across downstream tasks—including disease detection and phenological classification—and attain full-supervision performance using only 10% labeled samples. To our knowledge, this is the first work to explicitly integrate domain-specific vegetation priors into the objective of hyperspectral self-supervised learning, thereby unifying physical interpretability with task efficiency under unsupervised or weakly supervised settings.
📝 Abstract
Aerial remote sensing using multispectral and RGB imagers has provided a critical impetus to precision agriculture. Analysis of the hyperspectral images with limited or no labels is challenging. This paper focuses on self-supervised learning to create neural network embeddings reflecting vegetation properties of trees from aerial hyperspectral images of crop fields. Experimental results demonstrate that a constructed tree representation, using a vegetation property-related embedding space, performs better in downstream machine learning tasks compared to the direct use of hyperspectral vegetation properties as tree representations.