Latent Generative Solvers for Generalizable Long-Term Physics Simulation

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor generalization and long-horizon trajectory drift commonly observed in surrogate simulation of heterogeneous partial differential equation (PDE) systems. The authors propose a two-stage generative framework: a pretrained variational autoencoder (VAE) first maps diverse PDE states into a shared latent space, and a flow-matching Transformer then learns probabilistic latent dynamics. Two key mechanisms, an uncertainty-aware input perturbation (the "uncertainty knob") and a flow-forced update of the system-descriptor context, together improve long-term stability and generalization. By combining latent-space generative modeling, uncertainty-driven perturbations, and flow-enforced dynamics, the approach achieves strong out-of-distribution generalization (demonstrated on benchmarks such as Kolmogorov flow), reduces inference FLOPs by up to 70×, and mitigates long-horizon trajectory drift.
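The summary's second stage trains a Transformer by flow matching over latent states. The paper's exact objective is not reproduced here; the sketch below shows the standard conditional flow-matching loss with a linear interpolation path, where `flow_matching_loss` and the zero-velocity toy model are hypothetical simplifications for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, z1, rng):
    """One conditional flow-matching loss estimate in latent space.

    Linear path z_t = (1 - t) * z0 + t * z1, whose velocity target is
    the constant v* = z1 - z0. The model regresses this velocity.
    """
    z0 = rng.standard_normal(z1.shape)        # Gaussian noise sample
    t = rng.uniform(size=(z1.shape[0], 1))    # interpolation times
    zt = (1.0 - t) * z0 + t * z1              # point on the path
    v_target = z1 - z0                        # path velocity
    v_pred = model(zt, t)
    return np.mean((v_pred - v_target) ** 2)

# toy "model": predicts zero velocity everywhere, so the loss is just
# the mean squared magnitude of the velocity targets
z1 = rng.standard_normal((16, 8))             # batch of VAE latent states
loss = flow_matching_loss(lambda z, t: np.zeros_like(z), z1, rng)
```

In the paper, `model` would be the latent Transformer conditioned on the system descriptor; minimizing this loss over trajectories yields a generative latent solver.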

📝 Abstract
We study long-horizon surrogate simulation across heterogeneous PDE systems. We introduce Latent Generative Solvers (LGS), a two-stage framework that (i) maps diverse PDE states into a shared latent physics space with a pretrained VAE, and (ii) learns probabilistic latent dynamics with a Transformer trained by flow matching. Our key mechanism is an uncertainty knob that perturbs latent inputs during training and inference, teaching the solver to correct off-manifold rollout drift and stabilizing autoregressive prediction. We further use flow forcing to update a system descriptor (context) from model-generated trajectories, aligning train/test conditioning and improving long-term stability. We pretrain on a curated corpus of $\sim$2.5M trajectories at $128^2$ resolution spanning 12 PDE families. LGS matches strong deterministic neural-operator baselines on short horizons while substantially reducing rollout drift on long horizons. Learning in latent space plus efficient architectural choices yields up to **70$\times$** lower FLOPs than non-generative baselines, enabling scalable pretraining. We also show efficient adaptation to an out-of-distribution $256^2$ Kolmogorov flow dataset under limited finetuning budgets. Overall, LGS provides a practical route toward generalizable, uncertainty-aware neural PDE solvers that are more reliable for long-term forecasting and downstream scientific workflows.
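The abstract's uncertainty knob perturbs latent inputs so the solver learns to correct off-manifold drift during autoregressive rollout. The paper's exact perturbation scheme is not specified here; the sketch below assumes additive Gaussian noise of scale `sigma` injected at each step, with `rollout` and the contracting toy dynamics as illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def rollout(step_fn, z0, n_steps, sigma, rng):
    """Autoregressive latent rollout with an 'uncertainty knob'.

    Each latent input is perturbed by Gaussian noise of scale sigma
    before being passed to the one-step solver, mimicking the
    off-manifold states the model must learn to correct. The additive
    form and sigma are assumptions for this sketch.
    """
    traj = [z0]
    z = z0
    for _ in range(n_steps):
        z_in = z + sigma * rng.standard_normal(z.shape)  # perturbed input
        z = step_fn(z_in)                                # one latent step
        traj.append(z)
    return np.stack(traj)

# toy contracting dynamics standing in for the learned latent solver
traj = rollout(lambda z: 0.9 * z, np.ones(4), n_steps=10, sigma=0.05, rng=rng)
```

Setting `sigma = 0` recovers a standard deterministic rollout; a nonzero knob exposes the solver to the same drifted inputs it will encounter at inference time.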
Problem

Research questions and friction points this paper is trying to address.

long-term physics simulation
generalizable PDE solvers
rollout drift
heterogeneous PDE systems
uncertainty-aware simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Generative Solvers
flow matching
uncertainty knob
flow forcing
neural PDE solvers
🔎 Similar Papers
2024-02-19 · Neural Information Processing Systems · Citations: 8