🤖 AI Summary
This study addresses the prediction of plant community composition across Europe at specific spatiotemporal locations, supporting species identification tools and conservation decision-making. To tackle the challenge of fusing heterogeneous environmental data—namely satellite imagery, climate time series, and land-cover maps—we propose a graph-structure-guided framework that integrates feature modeling and prediction calibration. Our method employs a Swin Transformer backbone adapted to temporal cuboids, introduces a hierarchical cross-modal attention mechanism for dynamic multimodal alignment, and constructs an ecological-proximity-aware graph neural network for feature refinement and prediction correction. Evaluated on 4,716 field survey tasks, the framework achieves clear gains in prediction accuracy, and ablation studies confirm the contribution of each component. The approach enables robust multimodal fusion while yielding ecologically interpretable predictions grounded in species co-occurrence and habitat similarity.
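To make the cross-modal fusion step concrete, the sketch below shows one way an attention block can let image features and time-series features attend to each other before pooling. This is a minimal illustration, not the authors' implementation: the class name `CrossModalFusion`, the token shapes, and the mean-pooling readout are assumptions, and the paper's hierarchical variant would stack such blocks across several feature scales.

```python
# Minimal cross-modal attention fusion sketch (PyTorch).
# Names and shapes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Each modality queries the other modality's tokens.
        self.img_to_ts = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ts_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_img = nn.LayerNorm(dim)
        self.norm_ts = nn.LayerNorm(dim)

    def forward(self, img_tokens: torch.Tensor, ts_tokens: torch.Tensor) -> torch.Tensor:
        # img_tokens: (B, N_img, dim) from the image backbone
        # ts_tokens:  (B, N_ts, dim)  from the temporal backbone
        img_upd, _ = self.img_to_ts(img_tokens, ts_tokens, ts_tokens)
        ts_upd, _ = self.ts_to_img(ts_tokens, img_tokens, img_tokens)
        img_tokens = self.norm_img(img_tokens + img_upd)  # residual + norm
        ts_tokens = self.norm_ts(ts_tokens + ts_upd)
        # Pool each stream and concatenate into one fused vector per sample.
        return torch.cat([img_tokens.mean(dim=1), ts_tokens.mean(dim=1)], dim=-1)


if __name__ == "__main__":
    fusion = CrossModalFusion(dim=256)
    fused = fusion(torch.randn(2, 49, 256), torch.randn(2, 12, 256))
    print(fused.shape)  # torch.Size([2, 512])
```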
📝 Abstract
Predicting plant species composition in specific spatiotemporal contexts plays an important role in biodiversity management and conservation, as well as in improving species identification tools. Our work uses 88,987 plant survey records collected at specific spatiotemporal locations across Europe. We also use the corresponding satellite images, time series data, and climate time series, together with other rasterized environmental data such as land cover, human footprint, bioclimatic, and soil variables, as training data; the trained model predicts the outcomes of 4,716 plant surveys. We propose a graph-structure-based method for feature construction and prediction correction. Through comparative experiments, we select the best-performing backbone networks for feature extraction in both the temporal and image modalities. In this process, we build a backbone network based on Swin Transformer blocks to extract features from temporal cubes. We then design a hierarchical cross-attention mechanism capable of robustly fusing features from multiple modalities. During training, we adopt a fine-tuning-based 10-fold cross-fusion strategy and apply a Threshold Top-K method for post-processing. Ablation experiments demonstrate the performance improvements brought by our proposed solution pipeline.
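The abstract names a "Threshold Top-K" post-processing step without spelling it out. The sketch below shows one plausible reading: keep each survey's K highest-scoring species, then drop any whose predicted probability falls below a fixed threshold. The function name, the default `k`, and the `threshold` value are illustrative assumptions rather than the values used by the authors.

```python
# Illustrative "Threshold Top-K" post-processing sketch (assumed rule, not the authors' code).
import numpy as np


def threshold_top_k(probs: np.ndarray, k: int = 25, threshold: float = 0.1) -> list[list[int]]:
    """probs: (num_surveys, num_species) array of predicted probabilities."""
    predictions = []
    for row in probs:
        top_idx = np.argsort(row)[::-1][:k]                       # K best-scoring species
        kept = [int(i) for i in top_idx if row[i] >= threshold]   # then apply the threshold
        predictions.append(kept)
    return predictions


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_probs = rng.random((3, 100))
    print(threshold_top_k(demo_probs, k=5, threshold=0.5))
```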