🤖 AI Summary
This work addresses the physical inconsistency and ill-posedness that commonly affect data-driven modeling of reaction-diffusion (RD) systems. We propose a structured regularization framework whose core innovation is the first systematic construction of parameterized reaction terms that intrinsically satisfy mass conservation and quasipositivity, thereby guaranteeing nonnegative solutions and well-posedness of the learned PDE by construction. Theoretically, we establish a convergence analysis for the physics-constrained learning algorithm, proving that its solutions converge to a unique, regularization-minimizing solution. Methodologically, we provide an explicitly constructible, differentiable approximation scheme for quasipositive functions. Experiments demonstrate substantial improvements in the interpretability, numerical stability, and cross-dataset generalization of the learned RD models.
📝 Abstract
This paper addresses the problem of learning reaction-diffusion (RD) systems from data while ensuring physical consistency and well-posedness of the learned models. Building on a regularization-based framework for structured model learning, we focus on learning parameterized reaction terms and investigate how to incorporate key physical properties, such as mass conservation and quasipositivity, directly into the learning process. Our main contributions are twofold: First, we propose techniques to systematically modify a given class of parameterized reaction terms such that the resulting terms inherently satisfy mass conservation and quasipositivity, ensuring that the learned RD systems preserve non-negativity and adhere to physical principles. These modifications also guarantee well-posedness of the resulting PDEs under additional regularity and growth conditions. Second, we extend existing theoretical results on regularization-based model learning to RD systems using these physically consistent reaction terms. Specifically, we prove that solutions to the learning problem converge to a unique, regularization-minimizing solution of a limit system even when conservation laws and quasipositivity are enforced. In addition, we provide approximation results for quasipositive functions, essential for constructing physically consistent parameterizations. These results advance the development of interpretable and reliable data-driven models for RD systems that align with fundamental physical laws.
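To make the two structural modifications concrete, here is a minimal illustrative sketch in NumPy. It is not the paper's actual construction: the names `make_conservative` and `make_quasipositive`, the mean-subtraction projection, and the softplus-based smoothing of `min(0, x)` are all assumed choices for illustration. Mass conservation is enforced by projecting the reaction term onto the zero-sum subspace; quasipositivity (each component `f_i(u) >= 0` whenever `u_i = 0` and `u >= 0`) is enforced by subtracting a smoothed negative part of `f_i` evaluated on the boundary face `u_i = 0`, which keeps the modified term differentiable.

```python
import numpy as np

def make_conservative(f):
    """Project a reaction term f: R^n -> R^n onto the zero-sum subspace,
    so the components of the modified term always sum to zero
    (the reaction then conserves total mass)."""
    def f_cons(u):
        r = f(u)
        return r - r.mean()  # subtract the mean from every component
    return f_cons

def softminus(x, eps=0.01):
    """Smooth, everywhere-differentiable approximation of min(0, x):
    -eps * softplus(-x / eps), computed stably via logaddexp."""
    return -eps * np.logaddexp(0.0, -x / eps)

def make_quasipositive(f, eps=0.01):
    """Modify f so that each component satisfies f_i(u) >= 0 whenever
    u_i = 0 and u >= 0 (quasipositivity). The correction subtracts a
    smoothed negative part of f_i evaluated on the face {u_i = 0}."""
    def f_qp(u):
        r = f(u)
        out = np.empty_like(r)
        for i in range(len(u)):
            u0 = u.copy()
            u0[i] = 0.0  # evaluate f_i on the boundary face u_i = 0
            # on that face out[i] = eps * softplus(r[i] / eps) >= 0,
            # since softminus(x) <= 0 and x - softminus(x) = eps * softplus(x / eps)
            out[i] = r[i] - softminus(f(u0)[i], eps)
        return out
    return f_qp
```

The two modifications are shown separately on purpose: composing them requires care, since the zero-sum projection can reintroduce negative boundary values and break quasipositivity, which is one reason a systematic joint construction (as developed in the paper) is nontrivial.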