🤖 AI Summary
To address the poor interpretability of black-box models, the over-parameterization of parallel-augmented architectures, and the erosion of the physics-based component during training in physics-informed deep learning, this paper proposes orthogonal projection regularization (OPR). OPR enforces orthogonality between the neural network's learned residual term and the prior physical model, thereby preserving the physical structure and its interpretability. Integrated into a physics-guided parallel-augmented architecture with constraint-aware optimization, the method improves training stability, convergence speed, and parameter identifiability. Evaluated on multiple nonlinear system identification benchmarks, the proposed approach achieves markedly higher predictive accuracy while keeping the physical module analytically separable and independently verifiable. The core innovation is a geometric orthogonality constraint that enables a decoupled yet synergistic integration of physical priors and data-driven learning, balancing generalization ability with model transparency.
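The summary describes OPR only at a high level and does not give the exact formulation. As a hedged sketch, one plausible way to realize such a regularizer is to project the ML residual onto the span of the physics-based outputs over a training batch and penalize the projected component; the names below (`orthogonality_penalty`, `y_phys`, `y_nn`) are illustrative assumptions, not identifiers from the paper:

```python
import torch

def orthogonality_penalty(y_phys: torch.Tensor, y_nn: torch.Tensor) -> torch.Tensor:
    """Penalize the component of the ML residual lying in the span of the
    physics-based outputs over a batch (one possible OPR-style term).

    y_phys: (N, d) physics-model outputs over a batch of N samples.
    y_nn:   (N, d) neural-network residual outputs over the same batch.
    """
    # Treat each output channel as a direction in R^N.
    Phi = y_phys.reshape(y_phys.shape[0], -1)
    R = y_nn.reshape(y_nn.shape[0], -1)
    # Ridge-regularized normal equations for the least-squares coefficients
    # of R on the columns of Phi; Phi @ coeffs is then the orthogonal
    # projection of R onto span(Phi). The 1e-8 ridge guards against a
    # rank-deficient Gram matrix.
    G = Phi.T @ Phi + 1e-8 * torch.eye(Phi.shape[1], device=Phi.device, dtype=Phi.dtype)
    coeffs = torch.linalg.solve(G, Phi.T @ R)
    proj = Phi @ coeffs
    # Mean squared norm of the non-orthogonal (overlapping) component.
    return proj.pow(2).sum() / R.shape[0]
```

Driving this penalty to zero makes the residual orthogonal (over the batch) to what the physics model already explains, which is one way to obtain the decoupling the summary describes.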
📝 Abstract
Deep-learning-based nonlinear system identification has been shown to produce reliable and highly accurate models in practice. However, these black-box models lack physical interpretability, and a considerable part of the learning effort is often spent on capturing behavior that is already expected or known from a first-principles understanding of some aspects of the system. A potential solution is to integrate prior physical knowledge directly into the model structure, combining the strengths of physics-based modeling and deep-learning-based identification. The most common approach is an additive model augmentation structure, in which the physics-based and machine-learning (ML) components are connected in parallel. However, such models are over-parameterized, which makes training challenging and can cause the physics-based part to lose its interpretability. To overcome this challenge, this paper proposes an orthogonal projection-based regularization technique that improves parameter learning, convergence, and even model accuracy in learning-based augmentation of nonlinear baseline models.
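To make the additive parallel structure concrete, here is a minimal PyTorch sketch of a physics/ML model trained with a fit loss plus the orthogonality penalty sketched above; the class name, layer sizes, and regularization weight `lam` are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class ParallelAugmentedModel(nn.Module):
    """Additive augmentation y = f_phys(x) + f_nn(x): an interpretable
    physics-based baseline connected in parallel with an ML correction."""

    def __init__(self, physics_model: nn.Module, n_inputs: int, n_outputs: int, hidden: int = 64):
        super().__init__()
        self.physics = physics_model            # prior, physics-based component
        self.residual = nn.Sequential(          # black-box ML component
            nn.Linear(n_inputs, hidden), nn.Tanh(),
            nn.Linear(hidden, n_outputs),
        )

    def forward(self, x: torch.Tensor):
        y_phys = self.physics(x)
        y_nn = self.residual(x)
        # Return the parts separately so the physics module stays
        # analytically separable from the learned correction.
        return y_phys + y_nn, y_phys, y_nn

def training_step(model, optimizer, x, y, lam: float = 1.0):
    """One regularized step: fit error plus the (hypothetical) OPR term
    computed by orthogonality_penalty from the sketch above."""
    optimizer.zero_grad()
    y_hat, y_phys, y_nn = model(x)
    loss = nn.functional.mse_loss(y_hat, y) + lam * orthogonality_penalty(y_phys, y_nn)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Returning `y_phys` and `y_nn` separately is what lets the baseline model be inspected and verified independently of the learned residual, which is the interpretability property the abstract emphasizes.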