🤖 AI Summary
This study investigates how loss functions affect model stability and physical consistency in physics-informed deep learning, particularly under enforced stress-equilibrium boundary conditions. We employ a Pix2Pix network to predict stress fields in hyperelastic composite materials and systematically compare multiple physics-constrained loss formulations. To rigorously assess training variability, we propose a perturbation-analysis framework based on repeated training runs. Results demonstrate significant differences across loss functions in convergence behavior, prediction accuracy, and satisfaction of physical constraints; reporting single-run outcomes obscures this inherent instability. Crucially, this work is the first to explicitly identify, quantify, and emphasize the critical impact of training stochasticity on the reproducibility and reliability of physics-informed models. We advocate adopting statistical metrics from multiple independent training runs as a standard evaluation protocol, establishing a robustness-assessment paradigm for physics-informed neural networks (PINNs) and related methods.
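The multi-run evaluation protocol advocated above can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation code; the error values are hypothetical placeholders standing in for the final test metric of each independent training run.

```python
import statistics

# Hypothetical final test errors from 5 independent training runs
# (illustrative numbers only, not results from the paper).
errors = [0.042, 0.055, 0.047, 0.061, 0.044]

# Report mean +/- standard deviation rather than a single-run value,
# along with the observed range across seeds.
mean = statistics.mean(errors)
std = statistics.stdev(errors)
print(f"test error over {len(errors)} runs: {mean:.3f} ± {std:.3f} "
      f"(min {min(errors):.3f}, max {max(errors):.3f})")
```

Reporting the spread alongside the mean makes it visible when two loss functions differ less between methods than between seeds of the same method.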
📝 Abstract
A successful deep learning network is highly dependent not only on the training dataset but also on the training algorithm used to condition the network for a given task. The loss function, dataset, and tuning of hyperparameters all play an essential role in training a network, yet there is little discussion of the reliability or reproducibility of a training algorithm. The rise in popularity of physics-informed loss functions raises the question of how reliably a given loss function conditions a network to enforce a particular boundary condition. Reporting model variation is needed to assess a loss function's ability to consistently train a network to obey a given boundary condition, and it provides a fairer comparison among different methods. In this work, a Pix2Pix network predicting the stress fields of high-elastic-contrast composites is used as a case study. Several loss functions enforcing stress equilibrium are implemented, each displaying different levels of variation in convergence, accuracy, and enforcement of stress equilibrium across many training sessions. Suggested practices for reporting model variation are also shared.
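One common way to build a stress-equilibrium loss of the kind compared here is to penalize the divergence of the predicted stress field, which should vanish in static equilibrium (no body forces). The sketch below, using NumPy finite differences, is an assumed minimal formulation for illustration; the function names, the channel layout `[sigma_xx, sigma_yy, sigma_xy]`, and the weighting `lam` are hypothetical, not the paper's specific loss definitions.

```python
import numpy as np

def equilibrium_residual(sigma):
    """Mean squared 2D equilibrium residual of a plane-stress field.

    sigma: array of shape (3, H, W) holding [sigma_xx, sigma_yy, sigma_xy].
    Equilibrium (no body forces) requires
        d(sigma_xx)/dx + d(sigma_xy)/dy = 0
        d(sigma_xy)/dx + d(sigma_yy)/dy = 0
    """
    sxx, syy, sxy = sigma
    rx = np.gradient(sxx, axis=1) + np.gradient(sxy, axis=0)
    ry = np.gradient(sxy, axis=1) + np.gradient(syy, axis=0)
    return float(np.mean(rx**2 + ry**2))

def physics_informed_loss(pred, target, lam=0.1):
    """Data-fit term plus a weighted equilibrium penalty (illustrative)."""
    data_term = float(np.mean((pred - target) ** 2))
    return data_term + lam * equilibrium_residual(pred)
```

A spatially uniform stress field satisfies equilibrium exactly, so `equilibrium_residual(np.ones((3, 8, 8)))` returns `0.0`; a noisy prediction is penalized by both terms.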