🤖 AI Summary
Deep neural networks (DNNs) in safety-critical applications such as autonomous driving and robotics face runtime safety risks, including out-of-distribution inputs, adversarial attacks, and generalization failures. To address these risks, this paper presents a systematic survey of real-time, model-agnostic safety monitoring techniques. It proposes a unified taxonomy that classifies existing methods into three categories: input-layer analysis, internal-representation monitoring, and output uncertainty estimation, each explicitly mapped to the threat types it addresses. Reviewing the mainstream technical paradigms, including statistical anomaly detection, anomaly scoring, and confidence calibration, the survey delineates the performance boundaries and applicability conditions of each approach, identifies key limitations of current work, and highlights critical future research directions: scalability, cross-modal robustness, and lightweight deployment. The survey provides both a theoretical framework and practical guidelines for deploying high-reliability AI systems in real-world safety-critical environments.
📝 Abstract
Deep neural networks (DNNs) are widely used in perception systems for safety-critical applications, such as autonomous driving and robotics. However, DNNs remain vulnerable to various safety concerns, including generalization errors, out-of-distribution (OOD) inputs, and adversarial attacks, which can lead to hazardous failures. This survey provides a comprehensive overview of runtime safety monitoring approaches, which operate in parallel with DNNs during inference to detect these safety concerns without modifying the DNN itself. We categorize existing methods into three main groups: monitoring inputs, internal representations, and outputs. We analyze the state of the art for each category, identify strengths and limitations, and map methods to the safety concerns they address. In addition, we highlight open challenges and future research directions.
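To make the output-monitoring category concrete, here is a minimal sketch of one well-known baseline in that family: maximum softmax probability (MSP) thresholding, which flags an input as possibly OOD when the model's top-class confidence is low. The function name `msp_monitor` and the threshold value `0.7` are illustrative assumptions, not from the survey; real monitors typically calibrate the threshold on held-out in-distribution data.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = np.asarray(logits, dtype=float) - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def msp_monitor(logits, threshold=0.7):
    """Runtime output monitor (MSP baseline, illustrative).

    Runs alongside the DNN: given the raw logits of one prediction,
    flag the input when the maximum softmax probability falls below
    `threshold` (a hypothetical, uncalibrated value here).
    Returns (flagged, confidence).
    """
    conf = softmax(logits).max()
    return conf < threshold, conf

# Peaked logits -> high confidence, not flagged.
flagged, conf = msp_monitor([8.0, 0.5, 0.2])
# Near-uniform logits -> low confidence, flagged as possibly OOD.
flagged2, conf2 = msp_monitor([1.0, 0.9, 1.1])
```

Because such a monitor only reads the DNN's outputs, it fits the survey's requirement of operating in parallel at inference time without modifying the network; its main known limitation is that adversarial or OOD inputs can still elicit high softmax confidence.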