On Neural Scaling Laws for Weather Emulation through Continual Training

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work presents a systematic study of neural scaling laws for weather forecasting, demonstrating that scaling trends in this domain both exist and are predictable. Using a minimal, general-purpose Swin Transformer architecture, the authors adopt a continual training strategy based on a constant learning rate with periodic cooldown phases. Models trained this way follow predictable scaling trends and outperform standard cosine learning rate schedules, and the cooldown phases can be repurposed to improve downstream performance, enabling accurate multi-step rollouts over longer forecast horizons and sharper predictions through spectral loss adjustments. By constructing IsoFLOP curves over a wide range of model and dataset sizes, the authors identify compute-optimal training regimes. Extrapolating these trends to larger scales highlights potential performance limits, offering a principled basis for allocating compute in future high-capacity weather prediction systems.

📝 Abstract
Neural scaling laws, which in some domains can predict the performance of large neural networks as a function of model, data, and compute scale, are the cornerstone of building foundation models in Natural Language Processing and Computer Vision. We study neural scaling in Scientific Machine Learning, focusing on models for weather forecasting. To analyze scaling behavior in as simple a setting as possible, we adopt a minimal, scalable, general-purpose Swin Transformer architecture, and we use continual training with constant learning rates and periodic cooldowns as an efficient training strategy. We show that models trained in this minimalist way follow predictable scaling trends and even outperform standard cosine learning rate schedules. Cooldown phases can be re-purposed to improve downstream performance, e.g., enabling accurate multi-step rollouts over longer forecast horizons as well as sharper predictions through spectral loss adjustments. We also systematically explore a wide range of model and dataset sizes under various compute budgets to construct IsoFLOP curves, and we identify compute-optimal training regimes. Extrapolating these trends to larger scales highlights potential performance limits, demonstrating that neural scaling can serve as an important diagnostic for efficient resource allocation. We open-source our code for reproducibility.
Problem

Research questions and friction points this paper is trying to address.

neural scaling laws
weather emulation
scientific machine learning
compute-optimal training
forecasting
Innovation

Methods, ideas, or system contributions that make the work stand out.

neural scaling laws
continual training
Swin Transformer
IsoFLOP curves
weather emulation