🤖 AI Summary
Existing out-of-distribution (OOD) detection methods for learning-enabled cyber-physical systems flag distribution shift rather than safety: an OOD input does not necessarily imply a safety property violation, which limits their reliability as safety-critical monitors.
Method: We propose a robust safety monitoring framework that directly predicts violations of signal temporal logic (STL) safety specifications from predicted future trajectories, bypassing OOD detection entirely. The approach combines adaptive conformal prediction with incremental learning on top of a neural trajectory predictor, yielding theoretically grounded confidence guarantees for future safety assessments even on OOD data.
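The core monitoring step, predicting an STL safety violation from a predicted trajectory, can be illustrated with a minimal sketch. The function name, the specific specification (always keep a safe distance from an obstacle), and all parameters below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def stl_always_safe_robustness(traj, obstacle, d_safe):
    """Robustness of the STL spec G (dist(x_t, obstacle) >= d_safe)
    over a predicted trajectory: the minimum signed safety margin
    across the horizon. A negative value predicts a violation.

    traj     : sequence of predicted 2D positions, shape (T, 2)
    obstacle : 2D obstacle position
    d_safe   : required safety distance
    """
    dists = np.linalg.norm(np.asarray(traj) - np.asarray(obstacle), axis=1)
    return float(np.min(dists - d_safe))

# Example: a trajectory passing through the obstacle has negative robustness.
traj = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
print(stl_always_safe_robustness(traj, [1.0, 0.0], 0.5))  # -0.5 (violation)
print(stl_always_safe_robustness(traj, [5.0, 5.0], 0.5) > 0)  # True (safe)
```

In the paper's setting, the trajectory would come from the neural predictor, with conformal prediction inflating the margin to account for prediction uncertainty.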
Contribution/Results: The method achieves high recall, real-time performance, and prediction accuracy while retaining valid statistical guarantees. Evaluated on two benchmarks (F1Tenth static obstacle avoidance and collision prediction with multiple dynamic obstacles), it significantly outperforms state-of-the-art baselines. Crucially, it maintains both high timeliness and rigorous confidence guarantees even under OOD inputs.
📝 Abstract
The safety of learning-enabled cyber-physical systems is compromised by the well-known vulnerabilities of deep neural networks to out-of-distribution (OOD) inputs. Existing literature has sought to monitor the safety of such systems by detecting OOD data. However, such approaches have limited utility, as the presence of an OOD input does not necessarily imply the violation of a desired safety property. We instead propose to directly monitor safety in a manner that is itself robust to OOD data. To this end, we predict violations of signal temporal logic safety specifications based on predicted future trajectories. Our safety monitor additionally uses a novel combination of adaptive conformal prediction and incremental learning. The former obtains probabilistic prediction guarantees even on OOD data, and the latter prevents overly conservative predictions. We evaluate the efficacy of the proposed approach in two case studies on safety monitoring: 1) predicting collisions of an F1Tenth car with static obstacles, and 2) predicting collisions of a race car with multiple dynamic obstacles. We find that adaptive conformal prediction obtains theoretical guarantees where other uncertainty quantification methods fail to do so. Additionally, combining adaptive conformal prediction and incremental learning for safety monitoring achieves high recall and timeliness while reducing loss in precision. We achieve these results even in OOD settings and outperform alternative methods.
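The adaptive conformal prediction mechanism the abstract refers to can be sketched with the standard online update from adaptive conformal inference: the target miscoverage level is adjusted after each observation, which is what preserves coverage guarantees under distribution shift. The function name, window size, and step size below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def adaptive_conformal_intervals(preds, actuals, alpha=0.1, gamma=0.01, window=100):
    """Online adaptive conformal prediction for scalar forecasts.

    Maintains a running miscoverage target alpha_t, updated as
    alpha_{t+1} = alpha_t + gamma * (alpha - err_t), where err_t is 1
    when the observation falls outside the current interval. This
    keeps empirical coverage near 1 - alpha even under OOD drift.
    """
    alpha_t = alpha
    scores = []          # past nonconformity scores |y - y_hat|
    intervals, errs = [], []
    for y_hat, y in zip(preds, actuals):
        if scores:
            q_level = min(max(1.0 - alpha_t, 0.0), 1.0)
            q = np.quantile(scores[-window:], q_level)
        else:
            q = np.inf   # no calibration data yet: infinitely wide interval
        lo, hi = y_hat - q, y_hat + q
        err = 0.0 if lo <= y <= hi else 1.0
        alpha_t += gamma * (alpha - err)   # adaptive miscoverage update
        scores.append(abs(y - y_hat))
        intervals.append((lo, hi))
        errs.append(err)
    return intervals, float(np.mean(errs))
```

In the paper's monitor, the resulting intervals would inflate the predicted trajectories before checking the STL specification, so that a "safe" verdict carries a probabilistic guarantee even on OOD inputs; the incremental-learning component then keeps these intervals from becoming overly conservative.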