PRDP: Progressively Refined Differentiable Physics

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work tackles the computational burden of training neural networks through differentiable physics solvers, which arises because gradients must flow through many iterative solver steps. Drawing on ideas from bilevel optimization, the authors show that full network accuracy is achievable with physics significantly coarser than a fully converged solve. Their method, Progressively Refined Differentiable Physics (PRDP), starts training with coarse physics, adaptively refines the solver as training progresses, and stops refining at the level sufficient for full training accuracy. It targets iterative linear solvers for sparsely discretized differential operators and applies to both unrolled and implicit differentiation. Validated on inverse problems, autoregressive neural emulators, and correction-based neural-hybrid solvers, PRDP reduces training time by 62% on Navier-Stokes emulation without sacrificing accuracy.
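The core idea can be sketched on a toy inverse problem (an illustrative example, not the paper's implementation): fit a parameter whose observations come from an iterative linear solve, starting with a heavily truncated solver and deepening it only when the loss plateaus. The matrix `A`, the Richardson iteration, and all thresholds below are assumptions made for the sketch; finite differences stand in for automatic differentiation.

```python
import numpy as np

# Illustrative SPD system; b(theta) is linear in the unknown parameter.
A = np.array([[4.0, 1.0], [1.0, 3.0]])

def solve(theta, n_iters):
    # Richardson iteration for A u = b(theta), truncated at n_iters steps.
    b = np.array([theta, 2.0 * theta])
    u = np.zeros(2)
    omega = 0.2  # damping chosen so the iteration converges for this A
    for _ in range(n_iters):
        u = u + omega * (b - A @ u)
    return u

target = solve(1.5, 200)            # "observations" from a converged solve
theta, lr, n_iters = 0.0, 0.1, 2    # begin with very coarse physics
prev_loss = np.inf
for step in range(500):
    eps = 1e-6                      # finite differences stand in for autodiff
    loss = np.sum((solve(theta, n_iters) - target) ** 2)
    grad = (np.sum((solve(theta + eps, n_iters) - target) ** 2) - loss) / eps
    theta -= lr * grad
    # Adaptive refinement: when the loss plateaus, double the solver depth,
    # up to a cap; past "sufficient physics" further refinement stops mattering.
    if prev_loss - loss < 1e-3 * loss and n_iters < 50:
        n_iters = min(2 * n_iters, 50)
    prev_loss = loss
```

Most optimization steps here run the cheap 2-iteration solver; the fully refined solver is only reached late, which is the source of the compute savings.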

📝 Abstract
The physics solvers employed for neural network training are primarily iterative, and hence, differentiating through them introduces a severe computational burden as iterations grow large. Inspired by works in bilevel optimization, we show that full accuracy of the network is achievable through physics significantly coarser than fully converged solvers. We propose Progressively Refined Differentiable Physics (PRDP), an approach that identifies the level of physics refinement sufficient for full training accuracy. By beginning with coarse physics, adaptively refining it during training, and stopping refinement at the level adequate for training, it enables significant compute savings without sacrificing network accuracy. Our focus is on differentiating iterative linear solvers for sparsely discretized differential operators, which are fundamental to scientific computing. PRDP is applicable to both unrolled and implicit differentiation. We validate its performance on a variety of learning scenarios involving differentiable physics solvers such as inverse problems, autoregressive neural emulators, and correction-based neural-hybrid solvers. In the challenging example of emulating the Navier-Stokes equations, we reduce training time by 62%.
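The distinction between unrolled and implicit differentiation mentioned in the abstract can be sketched on a small linear system (an illustrative example, not the paper's setup; finite differences again stand in for autodiff). Unrolled differentiation backpropagates through every solver iteration, while implicit differentiation uses the converged fixed point A u* = b(theta), so du/dtheta = A^{-1} db/dtheta regardless of the iteration path.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])

def b(theta):
    return np.array([theta, 2.0 * theta])

def jacobi(theta, n_iters):
    # Jacobi iteration for A u = b(theta): u <- D^{-1} (b - R u).
    D = np.diag(np.diag(A))
    R = A - D
    u = np.zeros(2)
    for _ in range(n_iters):
        u = np.linalg.solve(D, b(theta) - R @ u)
    return u

def loss(u, target):
    return 0.5 * np.sum((u - target) ** 2)

target = np.array([1.0, 1.0])
theta = 2.0

# Unrolled: differentiate through all K solver steps (finite differences
# here; autodiff memory/compute would grow with K).
eps = 1e-6
g_unrolled = (loss(jacobi(theta + eps, 30), target)
              - loss(jacobi(theta, 30), target)) / eps

# Implicit: at convergence A u* = b(theta), so du/dtheta = A^{-1} db/dtheta,
# one extra linear solve independent of the iteration count.
u_star = np.linalg.solve(A, b(theta))
du_dtheta = np.linalg.solve(A, np.array([1.0, 2.0]))  # db/dtheta = [1, 2]
g_implicit = (u_star - target) @ du_dtheta
```

With 30 Jacobi sweeps the iteration is essentially converged, so the two gradients agree closely; with a truncated (coarse) solver they differ, which is exactly the regime PRDP exploits during early training.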
Problem

Research questions and friction points this paper is trying to address.

How to reduce the computational burden of differentiating through iterative physics solvers during neural network training
Whether full network accuracy is achievable with coarser, not fully converged, physics
How to save compute without sacrificing network accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training on coarse physics, shown sufficient for full network accuracy
Adaptive, progressive solver refinement with a criterion for stopping at the adequate level
Significant compute savings (62% faster training on Navier-Stokes emulation)