🤖 AI Summary
Neural networks that approximate nonlinear dynamical systems in safety-critical applications lack verifiable global error bounds. Method: We propose an adaptive, parallelizable verification framework based on certified first-order models that formally characterizes the global approximation error of a neural network and recasts it as a bounded perturbation acting on the approximated dynamics. The approach combines interval analysis with local affine (first-order) robustness verification, grounded in dynamical systems theory, preserving theoretical rigor without sacrificing computational scalability. Contribution/Results: Across established benchmarks, the framework improves on state-of-the-art methods in both verification accuracy and efficiency. It further extends to two settings not previously explored in this context, neural network compression and autoencoder-based learning of Koopman operators, making it, to our knowledge, the first certified surrogate modeling framework to combine formal error guarantees with practical scalability.
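To make the interval-analysis ingredient concrete, here is a minimal sketch of interval bound propagation through a single affine layer y = Wx + b. This illustrates the general technique only; it is not the paper's actual method, and all names here (`affine_interval`, the example `W`, `b`, and input box) are our own illustrative choices.

```python
import numpy as np

def affine_interval(W, b, lo, hi):
    """Propagate an axis-aligned box [lo, hi] through y = W @ x + b.

    Splitting W into its positive and negative parts yields the tight
    per-coordinate output bounds of standard interval arithmetic.
    """
    W_pos = np.maximum(W, 0.0)  # positive entries of W, zeros elsewhere
    W_neg = np.minimum(W, 0.0)  # negative entries of W, zeros elsewhere
    y_lo = W_pos @ lo + W_neg @ hi + b  # worst-case low output
    y_hi = W_pos @ hi + W_neg @ lo + b  # worst-case high output
    return y_lo, y_hi

# Example: a 2-D input box through a small affine map
W = np.array([[1.0, -2.0], [0.5, 3.0]])
b = np.array([0.1, -0.1])
lo = np.array([-1.0, 0.0])
hi = np.array([1.0, 1.0])
y_lo, y_hi = affine_interval(W, b, lo, hi)
```

For affine maps these bounds are exact (attained at box corners); the looseness that verification methods must fight arises only once nonlinear activations are composed on top.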
📝 Abstract
Neural networks hold great potential to act as approximate models of nonlinear dynamical systems, with the resulting neural approximations enabling verification and control of such systems. However, in safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. To address this fundamental challenge, we propose a novel, adaptive, and parallelizable verification method based on certified first-order models. Our approach provides formal error bounds on the neural approximations of dynamical systems, allowing them to be safely employed as surrogates by interpreting the error bound as bounded disturbances acting on the approximated dynamics. We demonstrate the effectiveness and scalability of our method on a range of established benchmarks from the literature, showing that it outperforms the state-of-the-art. Furthermore, we highlight the flexibility of our framework by applying it to two novel scenarios not previously explored in this context: neural network compression and an autoencoder-based deep learning architecture for learning Koopman operators, both yielding compelling results.
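The "error bound as bounded disturbance" interpretation in the abstract can be written out as follows (the notation here is ours, chosen for illustration, not necessarily the paper's):

```latex
% True dynamics f and neural approximation \hat{f} over a domain X:
x_{k+1} = f(x_k), \qquad \hat{x}_{k+1} = \hat{f}(\hat{x}_k).
% A certified global error bound
\| f(x) - \hat{f}(x) \| \le \varepsilon \quad \text{for all } x \in X
% lets the true system be treated as the surrogate under bounded disturbance:
x_{k+1} = \hat{f}(x_k) + w_k, \qquad \| w_k \| \le \varepsilon.
```

Any safety property verified for the perturbed surrogate, for every admissible disturbance sequence, then transfers to the original system.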