🤖 AI Summary
Solving stiff ordinary differential equations (ODEs) in combustion chemistry modeling remains challenging due to data sparsity, noise, and numerical stiffness. Method: This paper proposes ChemKANs, a physics-informed surrogate that augments the Kolmogorov-Arnold Network ODE (KAN-ODE) framework with knowledge of how information flows through the governing kinetic and thermodynamic laws, together with an elemental conservation loss term. This encodes a strong inductive bias, yielding an interpretable, overfitting-resistant architecture with full sharing of information across all inputs and outputs and a remarkably lean parameterization (as few as 344 parameters). Contribution/Results: In hydrogen combustion modeling, the model matches the accuracy of the detailed chemical mechanism while accelerating computation by 2x, in a solver that generalizes to larger-scale turbulent flow simulations. It also remains robust when trained on sparse, noisy data, a regime in which a standard DeepONet baseline degrades.
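As a concrete illustration of the elemental conservation constraint, the sketch below computes elemental mass fractions Z_e = Σ_k a_ek W_e Y_k / W_k from species mass fractions and penalizes their drift from the initial state. This is a minimal NumPy sketch under stated assumptions: the hydrogen/oxygen species set, the element matrix, and the squared-drift penalty form are illustrative, not the paper's exact formulation.

```python
import numpy as np

# Illustrative H2/O2 species set and molecular weights (g/mol); assumed here,
# not necessarily the paper's exact state vector.
SPECIES = ["H2", "H", "O2", "O", "OH", "H2O", "HO2", "H2O2"]
MW = np.array([2.016, 1.008, 31.998, 15.999, 17.007, 18.015, 33.006, 34.014])
A = np.array([
    [2, 1, 0, 0, 1, 2, 1, 2],   # H atoms per species
    [0, 0, 2, 1, 1, 1, 2, 2],   # O atoms per species
])
ELEM_MW = np.array([1.008, 15.999])  # atomic weights of H, O

def elemental_mass_fractions(Y):
    """Z_e = sum_k a_ek * W_e * Y_k / W_k; Y has shape (..., n_species)."""
    return (Y / MW) @ (A * ELEM_MW[:, None]).T

def conservation_loss(Y_pred, Y0):
    """Mean-squared drift of elemental mass fractions from the initial state."""
    return np.mean((elemental_mass_fractions(Y_pred)
                    - elemental_mass_fractions(Y0)) ** 2)

# Fresh H2/O2 mixture vs. a hypothetical predicted burned state: elements are
# nearly conserved, so the penalty is small.
Y0 = np.array([0.1, 0., 0.9, 0., 0., 0., 0., 0.])
Y_pred = np.array([[0.02, 0., 0.26, 0., 0., 0.72, 0., 0.]])
print(conservation_loss(Y_pred, Y0))
```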
📝 Abstract
Efficient chemical kinetic model inference and application for combustion problems are challenging due to large ODE systems and widely separated time scales. Machine learning techniques have been proposed to streamline these models, though strong nonlinearity and numerical stiffness, combined with noisy data sources, make their application challenging. The recently developed Kolmogorov-Arnold Networks (KANs) and KAN ordinary differential equations (KAN-ODEs) have been demonstrated as powerful tools for scientific applications thanks to their rapid neural scaling, improved interpretability, and smooth activation functions. Here, we develop ChemKANs by augmenting the KAN-ODE framework with physical knowledge of the flow of information through the relevant kinetic and thermodynamic laws, as well as an elemental conservation loss term. This novel framework encodes a strong inductive bias that enables streamlined training and higher-accuracy predictions, while facilitating parameter sparsity through full sharing of information across all inputs and outputs. In a model inference investigation, we find that ChemKANs exhibit no overfitting or model degradation when tasked with extracting predictive models from data that is both sparse and noisy, a task that a standard DeepONet struggles to accomplish. Next, we find that a remarkably parameter-lean ChemKAN (only 344 parameters) can accurately represent hydrogen combustion chemistry, providing a 2x acceleration over the detailed chemistry in a solver that is generalizable to larger-scale turbulent flow simulations. These demonstrations indicate the potential of ChemKANs for combustion physics and chemical kinetics, and demonstrate the scalability of generic KAN-ODEs to significantly larger and more numerically challenging problems than previously studied.
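For readers unfamiliar with the KAN-ODE framework, the sketch below illustrates the core idea: a small KAN serves as the right-hand side du/dt = KAN(u), and its parameters are fit by backpropagating through an ODE solver. Everything here is a hypothetical simplification, not the paper's implementation: Gaussian radial-basis activations stand in for B-spline edges, the layer sizes and hyperparameters are placeholders, and the actual ChemKAN additionally encodes the kinetic/thermodynamic information flow and the conservation loss described above.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class KANLayer(nn.Module):
    """Each input-output edge carries its own learnable 1-D activation,
    approximated here by a Gaussian radial-basis expansion plus a SiLU base."""
    def __init__(self, in_dim, out_dim, n_basis=5):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-1., 1., n_basis))
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))
        self.w_base = nn.Parameter(0.1 * torch.randn(out_dim, in_dim))

    def forward(self, x):                                   # x: (batch, in_dim)
        rbf = torch.exp(-((x[..., None] - self.centers) / 0.5) ** 2)
        edge = torch.einsum("bik,oik->bo", rbf, self.coef)  # spline-like part
        base = nn.functional.silu(x) @ self.w_base.T        # residual base part
        return edge + base

class KANODE(nn.Module):
    """du/dt = KAN(u); the state u could hold temperature + mass fractions."""
    def __init__(self, n_state, hidden=8):
        super().__init__()
        self.net = nn.Sequential(KANLayer(n_state, hidden),
                                 KANLayer(hidden, n_state))
    def forward(self, t, u):
        return self.net(u)

# Training sketch on placeholder data: match solved trajectories to
# observations; in practice a conservation penalty would be added to the loss.
func = KANODE(n_state=4)
t = torch.linspace(0., 1., 20)
u0 = torch.rand(1, 4)
u_data = torch.rand(20, 1, 4)                  # placeholder trajectory data
opt = torch.optim.Adam(func.parameters(), lr=1e-2)
for _ in range(100):
    u_pred = odeint(func, u0, t)               # (n_times, batch, n_state)
    loss = torch.mean((u_pred - u_data) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
```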