🤖 AI Summary
This work addresses the absence of computable state-estimation error bounds for learning-based Kazantzis–Kravaris/Luenberger (KKL) observers, in which a physics-informed neural network (PINN) learns the KKL transformation and a conventional neural network learns its left inverse. The authors derive, for the first time, an explicit error bound that depends only on quantities verifiable for the trained networks over a prescribed region of operation. The bound extends to nonlinear systems subject to bounded additive measurement noise, enabling formal performance guarantees for the observer. Experiments on multiple nonlinear benchmark systems demonstrate that the derived bound is both tight and effective, improving the reliability and certifiability of state estimates in noisy settings.
📝 Abstract
This paper proposes a computable state-estimation error bound for learning-based Kazantzis--Kravaris/Luenberger (KKL) observers. Recent work learns the KKL transformation map with a physics-informed neural network (PINN) and a corresponding left-inverse map with a conventional neural network. However, no computable state-estimation error bounds are currently available for this approach. We derive a state-estimation error bound that depends only on quantities that can be certified over a prescribed region using neural network verification. We further extend the result to bounded additive measurement noise and demonstrate the guarantees on nonlinear benchmark systems.
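For context, a KKL observer runs linear filter dynamics in a transformed coordinate system and recovers the state estimate through the left inverse of the KKL transformation; in the learning-based variant summarized above, that inverse is a trained neural network. The sketch below illustrates this generic structure only: the matrices `A`, `b`, the synthetic measurement stream, and the placeholder `T_inv` are all illustrative assumptions, not the paper's implementation or its learned maps.

```python
import numpy as np

def kkl_observer_step(z, y, A, b, dt):
    """One explicit-Euler step of the linear KKL observer dynamics
    z_dot = A z + b y, run in the transformed coordinates."""
    return z + dt * (A @ z + b * y)

# A Hurwitz (stable) A makes z forget its initialization and track T(x).
A = np.diag([-1.0, -2.0, -3.0])   # illustrative stable matrix
b = np.ones(3)                    # illustrative input vector for y

def T_inv(z):
    """Placeholder for the learned left inverse of the KKL transformation.
    In the paper this is a trained neural network; here it just projects
    the 3-dim transformed state to a hypothetical 2-dim state estimate."""
    return z[:2]

z = np.zeros(3)                   # observer state in transformed coordinates
for k in range(1000):
    y = np.sin(0.01 * k)          # synthetic scalar measurement stream
    z = kkl_observer_step(z, y, A, b, dt=0.01)

x_hat = T_inv(z)                  # recovered state estimate
```

The paper's contribution sits on top of this structure: because `T_inv` is a neural network, its error over a prescribed region can be certified with neural network verification tools, which is what makes the derived state-estimation error bound computable.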