🤖 AI Summary
This work addresses the challenge of long-term prediction in chaotic dynamical systems, where autoregressive approaches often suffer from error accumulation and distributional shift, leading to physical inconsistency and statistical collapse. To overcome these limitations, the authors propose the MSR-HINE architecture, which integrates a multi-rate hierarchical implicit recurrent mechanism, an implicit one-step predictor, and multi-scale latent variable injection. By combining coarse-to-fine prior generation with gated posterior fusion, the model preserves fine-scale dynamics while maintaining long-range contextual coherence on slow manifolds. Evaluated on the Kuramoto–Sivashinsky and Lorenz-96 benchmarks, MSR-HINE reduces RMSE by 62.8% and 27.0%, respectively, compared to a U-Net autoregressive baseline, and significantly extends the predictability horizon, keeping the anomaly correlation coefficient (ACC) above 0.5 for up to 400 and 100 steps, respectively.
📝 Abstract
Long-horizon autoregressive forecasting of chaotic dynamical systems remains challenging due to rapid error amplification and distribution shift: small one-step inaccuracies compound into physically inconsistent rollouts and collapse of large-scale statistics. We introduce MSR-HINE, a hierarchical implicit forecaster that augments multiscale latent priors with multi-rate recurrent modules operating at distinct temporal scales. At each step, coarse-to-fine recurrent states generate latent priors, an implicit one-step predictor refines the state with multiscale latent injections, and a gated fusion with posterior latents enforces scale-consistent updates; a lightweight hidden-state correction further aligns recurrent memories with fused latents. The resulting architecture maintains long-term context on slow manifolds while preserving fast-scale variability, mitigating error accumulation in chaotic rollouts. Across two canonical benchmarks, MSR-HINE yields substantial gains over a U-Net autoregressive baseline: on Kuramoto–Sivashinsky it reduces end-horizon RMSE by 62.8% at H=400 and improves end-horizon ACC by +0.983 (from -0.155 to 0.828), extending the ACC ≥ 0.5 predictability horizon from 241 to 400 steps; on Lorenz-96 it reduces RMSE by 27.0% at H=100 and improves end-horizon ACC by +0.402 (from 0.144 to 0.545), extending the ACC ≥ 0.5 horizon from 58 to 100 steps.
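To make the per-step update described in the abstract concrete, here is a minimal NumPy sketch of two of its ingredients: a recurrent state pair updated at different rates, and a gated fusion of prior and posterior latents. All function names, the sigmoid-gate form, and the update rates are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gated_fusion(prior, posterior, W_g, b_g):
    # Hypothetical gate: a sigmoid over the concatenated latents. The fused
    # latent is a convex combination of prior and posterior, so each entry
    # stays between the corresponding prior and posterior values.
    z = np.concatenate([prior, posterior])
    g = 1.0 / (1.0 + np.exp(-(W_g @ z + b_g)))
    return g * posterior + (1.0 - g) * prior

def multi_rate_step(t, h_slow, h_fast, x, k=4):
    # Illustrative multi-rate recurrence: the fast state updates every step,
    # while the slow state updates only every k steps, giving it a coarser
    # effective time scale for tracking slow-manifold context.
    h_fast = np.tanh(h_fast + x)
    if t % k == 0:
        h_slow = np.tanh(h_slow + h_fast)
    return h_slow, h_fast
```

In the paper's pipeline the slow (coarse) states would supply the priors, the one-step predictor would produce the posterior latents, and `gated_fusion` would merge the two before the hidden-state correction; this sketch only shows the shape of those updates, not the trained architecture.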