🤖 AI Summary
Remote sensing-based crop yield prediction faces challenges including complex spatial patterns, strong spectral heterogeneity, and poor generalization across crops and years. To address these, this paper proposes a dynamic multi-source fusion framework. Methodologically, we introduce a resolution-aware channel attention (RCA) module to model multi-scale spectral responses; design an edge-adaptive operator learning network (AOL-Net) to enhance spatial structural representation; and incorporate a learnable dual-branch fusion mechanism to enable cross-resolution synergistic modeling of MODIS and Sentinel-2 data. Extensive experiments on a real-world, multi-temporal, multi-crop dataset demonstrate that our approach consistently outperforms state-of-the-art methods in RMSE, MAE, and R². The results validate its strong generalizability across crops and years, robustness to spectral and spatial variability, and practical utility for operational agricultural monitoring.
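The channel-reweighting idea behind the RCA module can be illustrated with a squeeze-and-excitation-style sketch. This is a minimal NumPy illustration of adaptive channel reweighting only; the paper's actual RCA module additionally conditions on resolution-specific characteristics, which is not shown here, and the function and weight names (`channel_attention`, `w1`, `w2`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel reweighting (illustrative only).
    x: (C, H, W) multi-band feature map; w1: (C//r, C); w2: (C, C//r)."""
    squeeze = x.mean(axis=(1, 2))            # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)   # bottleneck + ReLU
    weights = sigmoid(w2 @ hidden)           # per-channel gates in (0, 1)
    return x * weights[:, None, None]        # reweight each spectral band

# Toy input: 8 spectral bands, 16x16 pixels, reduction ratio r = 2
rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

In a trained model, `w1` and `w2` are learned end to end, so bands that are informative for a given resolution receive gates near 1 and uninformative bands are suppressed.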
📝 Abstract
Accurate remote sensing-based crop yield prediction remains a fundamentally challenging task due to complex spatial patterns, heterogeneous spectral characteristics, and dynamic agricultural conditions. Existing methods often suffer from limited spatial modeling capacity and weak generalization across crop types and years. To address these challenges, we propose DFYP, a novel Dynamic Fusion framework for crop Yield Prediction, which combines spectral channel attention, edge-adaptive spatial modeling, and a learnable fusion mechanism to improve robustness across diverse agricultural scenarios. Specifically, DFYP introduces three key components: (1) a Resolution-aware Channel Attention (RCA) module that enhances spectral representation by adaptively reweighting input channels based on resolution-specific characteristics; (2) an Adaptive Operator Learning Network (AOL-Net) that dynamically selects operators for convolutional kernels to improve edge-sensitive spatial feature extraction under varying crop and temporal conditions; and (3) a dual-branch architecture with a learnable fusion mechanism that jointly models local spatial details and global contextual information to support cross-resolution and cross-crop generalization. Extensive experiments on multi-year MODIS and multi-crop Sentinel-2 datasets demonstrate that DFYP consistently outperforms state-of-the-art baselines in RMSE, MAE, and R² across different spatial resolutions, crop types, and time periods, showcasing its effectiveness and robustness for real-world agricultural monitoring.
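The learnable fusion in component (3) can be sketched as a gated blend of the two branch outputs. This is an assumption-laden simplification: the gate here is a single scalar logit, whereas the paper's mechanism may be per-channel or spatially varying, and the names (`learnable_fusion`, `alpha_logit`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def learnable_fusion(local_feat, global_feat, alpha_logit):
    """Blend two branch outputs with a single learned gate.
    In training, alpha_logit would be a parameter updated by backprop."""
    alpha = sigmoid(alpha_logit)  # gate in (0, 1)
    return alpha * local_feat + (1.0 - alpha) * global_feat

# Toy branch outputs: 4 channels, 8x8 spatial grid
local_feat = np.ones((4, 8, 8))    # e.g. fine-detail (Sentinel-2) branch
global_feat = np.zeros((4, 8, 8))  # e.g. coarse-context (MODIS) branch
fused = learnable_fusion(local_feat, global_feat, alpha_logit=0.0)
print(float(fused[0, 0, 0]))  # 0.5 when alpha_logit = 0 (equal weighting)
```

Because the gate is differentiable, the network can learn how much to trust each resolution branch per task rather than using a fixed average or concatenation.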