🤖 AI Summary
Existing PDE foundation models suffer from limited pretraining data scale, low autoregressive rollout accuracy, poor out-of-distribution (OOD) generalization, and high computational and data requirements.
Method: We propose a novel test-time computation paradigm that, for the first time, brings the "chain-of-thought" mechanism from large language models into PDE modeling. The inference framework is guided by two types of reward models (spatio-temporal consistency and physical fidelity): during inference, it dynamically allocates compute resources and performs adaptive rollout via stochastic-process-guided heuristic search, without increasing training cost.
Contribution/Results: Evaluated on the PDEGym compressible Euler equations benchmark, our method significantly outperforms standard non-adaptive baselines, achieving superior OOD generalization and higher computational efficiency while improving rollout stability and accuracy.
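To make the test-time computation idea concrete, here is a minimal sketch of reward-guided adaptive rollout in the spirit described above. The surrogate `step_model`, the two toy reward functions, and the reward weighting `w` are all illustrative stand-ins, not the paper's actual components: at each autoregressive step, the stochastic model proposes several candidate next states, and the candidate scoring highest under the combined rewards is kept.

```python
import numpy as np

def step_model(state, rng):
    """Stochastic surrogate: proposes a noisy next state (placeholder model)."""
    return state + 0.1 * rng.standard_normal(state.shape)

def consistency_reward(prev_state, candidate):
    """Spatio-temporal consistency: penalize large jumps between frames."""
    return -np.linalg.norm(candidate - prev_state)

def physics_reward(candidate):
    """Physical fidelity: e.g. penalize drift in a conserved quantity (here, total mass)."""
    return -abs(candidate.sum() - 1.0)

def ttc_rollout(init_state, n_steps, n_candidates=8, w=0.5, seed=0):
    """Autoregressive rollout that spends extra compute at inference:
    sample n_candidates per step and keep the highest-reward one."""
    rng = np.random.default_rng(seed)
    state = init_state
    traj = [state]
    for _ in range(n_steps):
        candidates = [step_model(state, rng) for _ in range(n_candidates)]
        scores = [w * consistency_reward(state, c) + (1 - w) * physics_reward(c)
                  for c in candidates]
        state = candidates[int(np.argmax(scores))]
        traj.append(state)
    return traj

traj = ttc_rollout(np.full(16, 1.0 / 16), n_steps=5)
print(len(traj))  # initial state plus 5 selected steps
```

Increasing `n_candidates` is the knob that trades inference compute for rollout quality; a non-adaptive baseline corresponds to `n_candidates=1`.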
📝 Abstract
Partial Differential Equations (PDEs) are the bedrock of modern computational science and engineering, yet solving them is inherently computationally expensive. While PDE foundation models have shown much promise for simulating such complex spatio-temporal phenomena, existing models remain constrained by their pretraining datasets and struggle with auto-regressive rollout performance, especially in out-of-distribution (OOD) cases. Furthermore, their significant compute and training-data requirements hamper their use in many critical applications. Inspired by recent advances in "thinking" strategies used in large language models (LLMs), we introduce the first test-time computing (TTC) strategy for PDEs, which utilizes additional computational resources during inference to achieve more accurate predictions with fewer training samples and smaller models. We accomplish this with two types of reward models that evaluate the predictions of a stochastic-based model for spatio-temporal consistency. We demonstrate this method on compressible Euler-equation simulations from the PDEGym benchmark and show that TTC yields improved predictions relative to standard non-adaptive auto-regressive inference. This TTC framework marks a foundational step towards more advanced reasoning algorithms for PDE modeling, including reinforcement-learning-based approaches, potentially transforming computational workflows in physics and engineering.