🤖 AI Summary
This study addresses the challenge of robustly discovering constitutive closure relations for nonlinear reaction–diffusion systems from noisy spatiotemporal observational data. The authors propose a three-stage neural-symbolic framework: first, a noise-resilient neural surrogate is constructed using a weak-formulation objective augmented with physical constraints; second, this surrogate is distilled into an interpretable symbolic expression; and third, the resulting closure model is validated via forward re-simulation under unseen initial conditions. The work uncovers a "bias inheritance" mechanism, whereby nearly all of the symbolic closure's error originates in the initial neural surrogate, indicating that the primary modeling bottleneck is the numerical inverse problem rather than symbolic compression. Experiments show that when the candidate function class matches the ground truth, classical basis functions perform exceptionally well; under function-class mismatch, the neural surrogate can still be compressed into compact symbolic laws with minimal degradation in rollout forecasts, though the bias inheritance ratio approaches unity.
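The "bias inheritance" finding can be illustrated with a toy sketch (not the paper's actual closures or code): a ground-truth saturation-form diffusion law `D_true`, a surrogate carrying a small constitutive bias, and a polynomial "symbolic compression" fitted to the surrogate rather than to the truth. All names and functions here are hypothetical.

```python
import numpy as np

# Hypothetical sketch of the bias inheritance ratio. The saturation-form
# closure D_true and the surrogate's bias term are illustrative choices,
# not the paper's actual functions.
u = np.linspace(0.0, 2.0, 400)

D_true = 0.1 / (1.0 + u**2)       # ground-truth saturation closure
D_surr = D_true + 0.005 * u       # surrogate with a small constitutive bias

# "Symbolic compression": fit the surrogate (not the truth) with a cubic.
coeffs = np.polyfit(u, D_surr, deg=3)
D_sym = np.polyval(coeffs, u)

def l2(f, g):
    return np.sqrt(np.mean((f - g) ** 2))

err_surr = l2(D_surr, D_true)     # true error of the neural surrogate
err_sym = l2(D_sym, D_true)       # true error of the symbolic closure
ratio = err_sym / err_surr        # bias inheritance ratio
print(f"inheritance ratio = {ratio:.3f}")
```

Because the symbolic model is fitted to the surrogate, it reproduces the surrogate's bias almost exactly, and the ratio sits near one: compression cannot repair a constitutive error it never sees.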
📝 Abstract
We investigate the data-driven discovery of constitutive closures in nonlinear reaction–diffusion systems with known governing PDE structures. Our objective is to robustly recover diffusion and reaction laws from spatiotemporal observations while avoiding the common pitfall of conflating low residuals or accurate short-horizon predictions with true physical recovery. We propose a three-stage neural-symbolic framework: (1) learning numerical surrogates under physical constraints using a noise-robust weak-form objective; (2) compressing these surrogates into restricted interpretable symbolic families (e.g., polynomial, rational, and saturation forms); and (3) validating the symbolic closures through explicit forward re-simulation on unseen initial conditions. Extensive numerical experiments reveal two distinct regimes. Under matched-library settings, weak polynomial baselines behave as correctly specified reference estimators, showing that neural surrogates do not uniformly outperform classical bases. Conversely, under function-class mismatch, neural surrogates provide necessary flexibility and can be compressed into compact symbolic laws with minimal rollout degradation. However, we identify a critical "bias inheritance" mechanism where symbolic compression does not automatically repair constitutive bias. Across various observation regimes, the true error of the symbolic closure closely tracks that of the neural surrogate, yielding a bias inheritance ratio near one. These findings demonstrate that the primary bottleneck in neural-symbolic modeling lies in the initial numerical inverse problem rather than the subsequent symbolic compression. We underscore that constitutive claims must be rigorously supported by forward validation rather than residual minimization alone.
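The noise robustness of weak-form objectives can be seen in a minimal sketch. Assuming the simplest instance of the problem class, linear diffusion u_t = D u_xx with a known exact solution: testing the residual against a smooth, compactly supported function φ and integrating by parts moves all derivatives off the noisy data and onto φ, so D can be estimated without differentiating measurements. The grid sizes, test function, and noise level are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Weak-form recovery of D in u_t = D u_xx from noisy samples (toy example).
rng = np.random.default_rng(0)

D_true, T = 0.5, 0.1
x = np.linspace(0.0, 1.0, 201)
t = np.linspace(0.0, T, 201)
dx, dt = x[1] - x[0], t[1] - t[0]
X, Tm = np.meshgrid(x, t, indexing="ij")

# Exact solution of u_t = D u_xx, plus additive measurement noise.
u = np.exp(-D_true * np.pi**2 * Tm) * np.sin(np.pi * X)
u_noisy = u + 0.01 * rng.standard_normal(u.shape)

# Test function phi(x,t) = g(x) h(t) with g = g' = 0 at x = 0,1 and
# h = 0 at t = 0,T, so all boundary terms in the integration by parts vanish.
g = X**2 * (1 - X) ** 2
gxx = 2 - 12 * X + 12 * X**2                 # g''(x)
h = Tm**2 * (T - Tm) ** 2
ht = 2 * Tm * (T - Tm) * (T - 2 * Tm)        # h'(t)

phi_t = g * ht
phi_xx = gxx * h

def integral(f):
    # Simple Riemann quadrature; the integrands vanish on the boundary here.
    return f.sum() * dx * dt

# Integrating ∬ (u_t - D u_xx) phi dx dt by parts gives
# r(D) = -∬ u (phi_t + D phi_xx) dx dt, which is linear in D:
# r(D) = a + D * b, so setting r(D) = 0 yields a closed-form estimate.
a = -integral(u_noisy * phi_t)
b = -integral(u_noisy * phi_xx)
D_est = -a / b
print(f"estimated D = {D_est:.4f} (true {D_true})")
```

Because the noise enters only through smooth integrals of φ's derivatives, its contribution averages out, which is the essence of the weak-form objective's noise resilience described above.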