🤖 AI Summary
Traditional scaling laws are difficult to apply to weather forecasting because of autoregressive error accumulation, heterogeneity across many physical channels, and disagreement between global and local evaluation metrics. This work proposes the first neural scaling-law framework tailored to long-horizon rollout prediction with heterogeneous multi-channel outputs, introducing multidimensional scaling axes spanning model size, data volume, and compute budget, alongside channel-granular error metrics. The study finds that while globally pooled metrics appear favorable, prediction errors are highly heterogeneous across both channels and forecast lead times, with several channels degrading sharply at longer ranges. This exposes a masking effect inherent in global evaluations and provides empirical grounding for designing channel-weighted loss functions and for allocating computational resources more effectively.
📝 Abstract
Compute-optimal scaling laws are relatively well studied for NLP and CV, where objectives are typically single-step and targets are comparatively homogeneous. Weather forecasting is harder to characterize in the same framework: autoregressive rollouts compound errors over long horizons, outputs couple many physical channels with disparate scales and predictability, and globally pooled test metrics can disagree sharply with the per-channel, late-lead behavior implied by short-horizon training. We extend neural scaling analysis for autoregressive weather forecasting from single-step training loss to long rollouts and per-channel metrics. We quantify (1) how prediction error is distributed across channels and how its growth rate evolves with forecast horizon, (2) whether power-law scaling in rollout length holds for test error when error is pooled globally, and (3) how that fit varies jointly with horizon and channel along parameter-, data-, and compute-based scaling axes. We find strong cross-channel and cross-horizon heterogeneity: pooled scaling can look favorable while many channels degrade at late leads. We discuss implications for weighted objectives, horizon-aware curricula, and resource allocation across outputs.
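As a toy illustration of the pooled-versus-per-channel distinction the abstract describes, the sketch below fits a power law in forecast lead time to both per-channel and globally pooled errors. The channel names (`z500`, `t2m`), exponents, and error values are hypothetical stand-ins, not numbers from the paper; the point is only that a pooled fit yields a single intermediate growth exponent that hides how differently individual channels degrade.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y ≈ a * x^b by linear regression in log-log space; return (a, b)."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Hypothetical per-channel RMSE as a function of lead time (hours).
leads = np.array([6.0, 12.0, 24.0, 48.0, 96.0, 192.0])
errors = {
    "z500": 0.10 * leads**0.45,  # slowly degrading channel
    "t2m":  0.05 * leads**0.90,  # fast-degrading channel
}

# Globally pooled metric: average channels before fitting.
pooled = np.mean(np.stack(list(errors.values())), axis=0)
_, b_pooled = fit_power_law(leads, pooled)
print(f"pooled growth exponent: {b_pooled:.2f}")

# Per-channel fits recover the distinct growth rates the pooled fit masks.
for name, e in errors.items():
    _, b_chan = fit_power_law(leads, e)
    print(f"{name}: growth exponent {b_chan:.2f}")
```

Here the pooled exponent lands between the two channel exponents, so a favorable-looking pooled curve is consistent with one channel growing twice as fast as the other, which is the masking effect the abstract emphasizes.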