Stochastic and Non-local Closure Modeling for Nonlinear Dynamical Systems via Latent Score-based Generative Models

📅 2025-06-25
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Conventional closure models for nonlinear multiscale dynamical systems, such as engineering turbulence, are deterministic and local, and the physical priors behind them become unreliable when there is no clear scale separation; physics-space diffusion models offer stochasticity and non-locality, but their prohibitively high inference cost hinders practical deployment. Method: We propose a stochastic, non-local closure paradigm based on score-based generative modeling in latent space. By jointly training a convolutional autoencoder and a conditional diffusion model, we encode subgrid-scale dynamics into a low-dimensional latent representation, enabling efficient stochastic closure. Contribution/Results: The approach removes the deterministic and locality constraints of classical closures, reduces sampling cost several-fold relative to physics-space diffusion models while preserving comparable predictive accuracy, and supports stable, efficient integration into large-scale numerical simulations.
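
The joint training idea can be illustrated with a short sketch. The following PyTorch-style code is a minimal, hypothetical illustration and not the authors' implementation: all module names, layer sizes, and the DDPM-style noise-prediction loss (a common stand-in for score matching) are assumptions.

```python
# Hypothetical sketch: joint training of a convolutional autoencoder and a
# conditional latent diffusion model (DDPM-style noise prediction).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy convolutional autoencoder compressing 2D fields to a latent grid."""
    def __init__(self, ch=1, width=32, latent_ch=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(ch, width, 4, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(width, latent_ch, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, width, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose2d(width, ch, 4, stride=2, padding=1),
        )

class LatentDenoiser(nn.Module):
    """Predicts the noise added to a latent closure field, conditioned on the
    encoded resolved-scale state; a U-Net would be used in practice."""
    def __init__(self, latent_ch=4, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * latent_ch + 1, width, 3, padding=1), nn.GELU(),
            nn.Conv2d(width, width, 3, padding=1), nn.GELU(),
            nn.Conv2d(width, latent_ch, 3, padding=1),
        )

    def forward(self, z_t, t, cond):
        # Broadcast the diffusion time t as an extra channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *z_t.shape[-2:])
        return self.net(torch.cat([z_t, cond, t_map], dim=1))

def joint_loss(ae, denoiser, closure, resolved, alphas_bar):
    """Reconstruction loss + latent denoising loss, optimized together."""
    z0 = ae.encoder(closure)
    loss_rec = (ae.decoder(z0) - closure).pow(2).mean()

    t = torch.randint(0, len(alphas_bar), (closure.shape[0],))
    a = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(z0)
    z_t = a.sqrt() * z0 + (1 - a).sqrt() * eps   # forward noising in latent space
    cond = ae.encoder(resolved)                  # condition on resolved scales
    eps_hat = denoiser(z_t, t.float() / len(alphas_bar), cond)
    loss_diff = (eps_hat - eps).pow(2).mean()
    return loss_rec + loss_diff
```

Because the encoder receives gradients from both loss terms, the latent space is shaped to be simultaneously easy to reconstruct from and easy for the diffusion model to sample, which is the stated benefit of joint over sequential training.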

📝 Abstract
We propose a latent score-based generative AI framework for learning stochastic, non-local closure models and constitutive laws in nonlinear dynamical systems of computational mechanics. This work addresses a key challenge in modeling complex multiscale dynamical systems without a clear scale separation, for which numerically resolving all scales is prohibitively expensive, e.g., for engineering turbulent flows. While classical closure modeling methods leverage domain knowledge to approximate subgrid-scale phenomena, their deterministic and local assumptions can be too restrictive in regimes lacking a clear scale separation. Recent developments in diffusion-based stochastic models have shown promise in the context of closure modeling, but their prohibitive inference cost limits their use in many real-world applications. This work addresses that limitation by jointly training convolutional autoencoders with conditional diffusion models in latent space, significantly reducing the dimensionality of the sampling process while preserving essential physical characteristics. Numerical results demonstrate that the joint training approach helps discover a latent space that not only guarantees small reconstruction errors but also ensures good performance of the diffusion model in that space. When integrated into numerical simulations, the proposed stochastic modeling framework via latent conditional diffusion models achieves significant computational acceleration while maintaining predictive accuracy comparable to standard diffusion models in physical space.
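
To make the claimed acceleration concrete, here is a hedged sketch of DDPM-style ancestral sampling in latent space, reusing the hypothetical modules from the sketch above. Every denoiser evaluation runs on the downsampled latent grid rather than the full physical field, which is where the cost reduction relative to physics-space diffusion models comes from.

```python
@torch.no_grad()
def sample_closure(ae, denoiser, resolved, betas):
    """Draw one stochastic closure sample by reverse diffusion in latent
    space, then decode to physical space (standard DDPM ancestral sampling;
    hypothetical coupling to the modules sketched above)."""
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    cond = ae.encoder(resolved)        # condition on the resolved-scale state
    z = torch.randn_like(cond)         # start from pure latent noise
    T = len(betas)
    for i in reversed(range(T)):
        t = torch.full((z.shape[0],), i / T)
        eps_hat = denoiser(z, t, cond)
        # DDPM posterior mean update for z_{i-1} given z_i.
        z = (z - betas[i] / (1 - alphas_bar[i]).sqrt() * eps_hat) / alphas[i].sqrt()
        if i > 0:
            z = z + betas[i].sqrt() * torch.randn_like(z)
    return ae.decoder(z)               # closure field in physical space
```

As an illustrative (not paper-reported) example, with betas = torch.linspace(1e-4, 0.02, 1000) and a 256x256 field compressed by the two stride-2 convolutions to a 64x64 latent grid, each of the 1000 denoiser calls touches 16x fewer grid points than its physics-space counterpart.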
Problem

Research questions and friction points this paper is trying to address.

Modeling multiscale systems without clear scale separation
Reducing computational cost of stochastic closure models
Maintaining accuracy while accelerating numerical simulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent score-based generative AI framework (a solver-coupling sketch follows this list)
Joint training of autoencoders with diffusion models
Dimensionality reduction while preserving physical characteristics
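
As referenced in the list above, the sketch below shows one hypothetical way a sampled closure term could be coupled to a resolved-scale solver; the paper's actual time integration scheme may differ.

```python
def step_with_closure(u, dt, rhs_resolved, ae, denoiser, betas):
    """One explicit Euler step of a resolved-scale solver with a stochastic,
    non-local closure term sampled from the latent diffusion model
    (hypothetical coupling for illustration)."""
    tau = sample_closure(ae, denoiser, u, betas)  # fresh closure sample each step
    return u + dt * (rhs_resolved(u) + tau)
```

Drawing a fresh sample at every step makes the closed simulation stochastic; repeating the integration yields ensembles from which statistics of the unresolved scales can be estimated.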