Tighnari: Multi-modal Plant Species Prediction Based on Hierarchical Cross-Attention Using Graph-Based and Vision Backbone-Extracted Features

📅 2025-01-05
🏛️ Conference and Labs of the Evaluation Forum
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses spatiotemporal plant community distribution prediction across Europe to enhance species identification accuracy at specific spatiotemporal locations for conservation decision-making. To tackle the challenge of fusing heterogeneous environmental data—namely satellite imagery, climate time series, and land-cover maps—we propose a graph-structure-guided framework integrating feature modeling and prediction calibration. Our method employs a Swin Transformer backbone adapted to temporal cuboids, introduces a novel hierarchical cross-modal attention mechanism for dynamic multimodal alignment, and constructs an ecological proximity–aware graph neural network for feature refinement and prediction correction. Evaluated on 4,716 field survey tasks, the framework achieves significant improvements in prediction accuracy. Ablation studies confirm the efficacy of each component. The approach enables robust multimodal fusion while yielding ecologically interpretable predictions grounded in species co-occurrence and habitat similarity principles.
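The summary names a hierarchical cross-modal attention mechanism for aligning the image, climate, and land-cover features, but gives no equations. As a rough single-head illustration (no learned projections and a guessed sequential fusion order, both simplifications not stated in the paper), cross-attention lets tokens from one modality query another:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats):
    """query_feats: (n_q, d) tokens from one modality (e.g. image patches);
    context_feats: (n_c, d) tokens from another (e.g. climate series).
    Returns (n_q, d): each query token as a weighted mix of context tokens."""
    d = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ context_feats

def hierarchical_fuse(modalities):
    # Sequentially attend from the running fused representation into each
    # remaining modality, with a residual connection. The actual level
    # ordering and fusion rule in the paper are not specified here.
    fused = modalities[0]
    for feats in modalities[1:]:
        fused = fused + cross_attention(fused, feats)
    return fused
```

With a single context token the attention weights collapse to 1, so the output simply broadcasts that token — a quick sanity check on the mechanism.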

📝 Abstract
Predicting plant species composition in specific spatiotemporal contexts plays an important role in biodiversity management and conservation, as well as in improving species identification tools. Our work utilizes 88,987 plant survey records conducted in specific spatiotemporal contexts across Europe. We also use the corresponding satellite images, satellite time series, climate time series, and other rasterized environmental data such as land cover, human footprint, bioclimatic, and soil variables as training data to train the model to predict the outcomes of 4,716 plant surveys. We propose a feature construction and result correction method based on the graph structure. Through comparative experiments, we select the best-performing backbone networks for feature extraction in both the temporal and image modalities. In this process, we build a backbone network based on the Swin Transformer block for extracting temporal cube features. We then design a hierarchical cross-attention mechanism capable of robustly fusing features from multiple modalities. During training, we adopt a 10-fold cross-fusion method based on fine-tuning and use a Threshold Top-K method for post-processing. Ablation experiments demonstrate the improvements in model performance brought by our proposed solution pipeline.
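The abstract mentions a "Threshold Top-K" post-processing step without further detail. One plausible reading — sketched below, with the parameter names, values, and single-species fallback all assumptions of ours rather than the paper's — is to rank species by predicted probability, take the top K, and then discard those below a probability threshold:

```python
import numpy as np

def threshold_top_k(probs, k=25, threshold=0.2):
    """Hypothetical Threshold Top-K post-processing.
    probs: (n_species,) predicted presence probabilities for one survey.
    Returns the indices of the predicted species set."""
    # rank species by predicted probability, descending
    order = np.argsort(probs)[::-1]
    top = order[:k]
    # within the top-K, keep only species above the probability threshold
    kept = top[probs[top] >= threshold]
    # always predict at least one species (assumed fallback)
    if kept.size == 0:
        kept = top[:1]
    return kept
```

Combining a rank cutoff with a probability threshold lets the predicted set size adapt per survey instead of always emitting exactly K species.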
Problem

Research questions and friction points this paper is trying to address.

Plant Species Prediction
Conservation Promotion
Plant Identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention Mechanism
Multi-source Data Integration
Time-aware Network Architecture
Haixu Liu
The University of Sydney
Penghao Jiang
The University of Sydney, Camperdown Campus, Sydney, 2006 NSW, Australia
Zerui Tao
The University of Sydney, Camperdown Campus, Sydney, 2006 NSW, Australia
Muyan Wan
The University of Sydney, Camperdown Campus, Sydney, 2006 NSW, Australia
Qiuzhuang Sun
University of Sydney