🤖 AI Summary
This work addresses the main challenges of long-range weather forecasting—catastrophic forgetting, error accumulation, and high computational cost—by introducing the EMFormer architecture. EMFormer captures multi-scale features through a single convolutional operation and integrates an accumulative context finetuning strategy with a dynamically sinusoidal-weighted composite loss. This design enables end-to-end long-sequence modeling that balances short-term accuracy and long-term consistency. The proposed method improves prediction accuracy for both extended forecasts and extreme weather events, achieves a 5.69× inference speedup over conventional multi-scale modules, and demonstrates strong generalization on the ImageNet-1K and ADE20K benchmarks.
📝 Abstract
Long-term weather forecasting is critical for socioeconomic planning and disaster preparedness. While recent approaches employ finetuning to extend prediction horizons, they remain constrained by catastrophic forgetting, error accumulation, and high training overhead. To address these limitations, we present a novel pipeline spanning pretraining, finetuning, and forecasting that enhances long-context modeling while reducing computational overhead. First, we introduce an Efficient Multi-scale Transformer (EMFormer) that extracts multi-scale features through a single convolution in both training and inference. Building on this architecture, we further employ accumulative context finetuning to improve temporal consistency without degrading short-term accuracy. Additionally, we propose a composite loss that dynamically balances its terms via sinusoidal weighting, adaptively guiding the optimization trajectory throughout pretraining and finetuning. Experiments show that our approach achieves strong performance in weather forecasting and extreme event prediction, substantially improving long-term forecast accuracy. Moreover, EMFormer demonstrates strong generalization on vision benchmarks (ImageNet-1K and ADE20K) while delivering a 5.69× speedup over conventional multi-scale modules.
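The abstract does not give the exact form of the sinusoidal weighting. As a minimal sketch of one plausible scheme — where a short-term accuracy term and a long-term consistency term are blended by a sine-based schedule over normalized training progress — the following illustrates the idea; the function names and the specific schedule are assumptions for illustration, not the authors' implementation:

```python
import math

def sinusoidal_weight(step: int, total_steps: int) -> float:
    # Hypothetical schedule: smoothly ramps from 0 to 1 over training
    # via a sine of normalized progress t in [0, 1].
    t = step / total_steps
    return 0.5 * (1.0 + math.sin(math.pi * (t - 0.5)))

def composite_loss(short_term_loss: float, long_term_loss: float,
                   step: int, total_steps: int) -> float:
    # Early training emphasizes the short-term term; the weight then
    # shifts toward long-term consistency (assumed blending rule).
    w = sinusoidal_weight(step, total_steps)
    return (1.0 - w) * short_term_loss + w * long_term_loss
```

With this schedule, the weight is 0 at the start, 0.5 at the midpoint, and 1 at the end of training, so the optimization trajectory gradually trades short-term fidelity for long-term consistency.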