🤖 AI Summary
Neural network–based PDE solvers often violate physical conservation laws due to nonlinear constraints and long-range temporal dependencies. To address this, we propose a training-free post-hoc projection framework that projects predicted solutions onto the feasible set defined by the governing equations. Our approach comprises two complementary components: (i) a high-accuracy nonlinear optimization scheme for constraint correction, and (ii) an efficient, stable local linearization projection leveraging Jacobian-vector and vector-Jacobian products. To our knowledge, this is the first systematic study evaluating training-free projection strategies for dynamical PDEs. Experiments on canonical equations—including Burgers’ and KdV—demonstrate substantial reductions in constraint violation, superior accuracy and physical consistency compared to baselines such as PINNs, and effective mitigation of error accumulation over long-time integration.
📝 Abstract
Neural PDE solvers used for scientific simulation often violate governing equation constraints. While solutions can be projected cheaply onto linear constraints, many constraints are nonlinear, which complicates projection onto the feasible set. Dynamical PDEs are especially difficult because constraints induce long-range dependencies in time. In this work, we evaluate two training-free, post-hoc projections of approximate solutions: a nonlinear optimization-based projection, and a local linearization-based projection using Jacobian-vector and vector-Jacobian products. We analyze constraints across representative PDEs and find that both projections substantially reduce violations and improve accuracy over physics-informed baselines.
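To make the linearization-based projection concrete, here is a minimal matrix-free sketch in JAX (my own illustration; the paper's actual algorithm and function names are not specified in this summary). For a constraint map g with g(u) = 0 on the feasible set, linearizing at a prediction u_hat gives the minimum-norm correction d = -Jᵀ(JJᵀ)⁻¹ g(u_hat), where J is the Jacobian of g at u_hat. The operators v ↦ Jv and w ↦ Jᵀw are applied via `jax.jvp` and `jax.vjp`, so J is never materialized, and (JJᵀ)⁻¹ is applied with conjugate gradients:

```python
import jax
import jax.numpy as jnp

def linearized_projection(g, u_hat, maxiter=50, tol=1e-10):
    """One local-linearization projection step toward {u : g(u) = 0}.

    Solves  min ||d||^2  s.t.  g(u_hat) + J d = 0,
    whose closed form is  d = -J^T (J J^T)^{-1} g(u_hat).
    J and J^T act matrix-free through jvp/vjp; (J J^T)^{-1} is
    applied with conjugate gradients.
    """
    r = g(u_hat)                                   # constraint residual
    jvp = lambda v: jax.jvp(g, (u_hat,), (v,))[1]  # v -> J v
    _, vjp_fun = jax.vjp(g, u_hat)
    vjp = lambda w: vjp_fun(w)[0]                  # w -> J^T w
    normal_op = lambda w: jvp(vjp(w))              # w -> J J^T w
    lam, _ = jax.scipy.sparse.linalg.cg(normal_op, r, tol=tol, maxiter=maxiter)
    return u_hat - vjp(lam)                        # corrected prediction

# Toy usage (hypothetical constraint): project onto the unit sphere,
# g(u) = u.u - 1. One step shrinks the violation; iterating converges.
g = lambda u: jnp.array([u @ u - 1.0])
u_hat = jnp.array([2.0, 0.0, 0.0])
u_proj = linearized_projection(g, u_hat)           # -> [1.25, 0., 0.]
```

Because the step only solves the linearized constraint, a single application reduces rather than eliminates the violation for nonlinear g; the high-accuracy nonlinear-optimization variant described above would instead iterate this (or a full solver) to convergence.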