🤖 AI Summary
This work proposes a framework for learning robust runtime monitors based on interval Hidden Markov Models (iHMMs) to address the challenge of predicting safety violations in autonomous systems. The approach formalizes monitor learning as a problem of model conformance testing and refinement, with theoretical guarantees of convergence. An efficient iHMM-based risk assessment algorithm enables online estimation of safety risks. Experimental results demonstrate that the learned monitors significantly outperform model-free methods under challenging conditions such as limited data availability and distribution shift, exhibiting superior robustness and predictive accuracy.
📝 Abstract
We present a model-based approach to learning robust runtime monitors for autonomous systems. Runtime monitors play a crucial role in raising the level of assurance by observing system behavior and predicting potential safety violations. In our approach, we propose to capture a system's (stochastic) behavior using interval Hidden Markov Models (iHMMs). The monitor then uses this learned iHMM to derive risk estimates for potential safety violations. The paper makes three key contributions: (1) a formalization of the problem of learning robust runtime monitors, (2) a novel framework that uses conformance-testing-based refinement to learn robust iHMMs with convergence guarantees, and (3) an efficient monitoring algorithm for computing risk estimates over iHMMs. Our empirical results demonstrate the efficacy of monitors learned using our approach, particularly when compared to model-free monitoring approaches that rely solely on collected data without access to a system model.
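To make the core idea concrete, the following is a minimal, illustrative sketch (not the paper's actual algorithm) of how interval-valued transition probabilities can yield a conservative risk estimate. The states, interval bounds, and horizon below are hypothetical; propagating the upper transition bounds gives an over-approximation of the probability of reaching an unsafe state.

```python
import numpy as np

# Hypothetical 3-state system: 0 = nominal, 1 = degraded, 2 = unsafe (absorbing).
# Transition probabilities are known only up to intervals [T_low, T_high].
T_low = np.array([[0.80, 0.10, 0.00],
                  [0.20, 0.50, 0.10],
                  [0.00, 0.00, 1.00]])
T_high = np.array([[0.90, 0.20, 0.05],
                   [0.40, 0.70, 0.30],
                   [0.00, 0.00, 1.00]])

def worst_case_risk(belief, horizon, unsafe=2):
    """Upper bound on the probability of reaching the unsafe state
    within `horizon` steps, obtained by propagating the upper
    transition bounds. Because upper bounds need not sum to 1,
    this over-approximates (never underestimates) the true risk."""
    b = np.asarray(belief, dtype=float)
    for _ in range(horizon):
        b = np.minimum(b @ T_high, 1.0)  # pessimistic one-step propagation
    return float(min(b[unsafe], 1.0))

# Starting from the nominal state, bound the 5-step risk.
risk = worst_case_risk([1.0, 0.0, 0.0], horizon=5)
print(f"worst-case 5-step risk: {risk:.3f}")
```

A monitor could compare such an upper bound against a risk threshold and raise an alarm when it is exceeded; the paper's actual algorithm additionally conditions on the observation sequence via the iHMM's emission model.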