Runtime Safety Monitoring of Deep Neural Networks for Perception: A Survey

📅 2025-11-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the runtime safety risks facing deep neural networks (DNNs) in safety-critical applications such as autonomous driving and robotics, including out-of-distribution inputs, adversarial attacks, and generalization failures, this paper presents a systematic survey of real-time, model-agnostic safety monitoring techniques. We propose a unified taxonomy that classifies existing methods into three categories: input-layer analysis, internal representation monitoring, and output uncertainty estimation, each explicitly mapped to the threat types it addresses. By reviewing the mainstream technical paradigms, including statistical anomaly detection, anomaly scoring, and confidence calibration, we delineate the performance boundaries and applicability conditions of each approach, identify key limitations of current work, and highlight critical future research directions: scalability, cross-modal robustness, and lightweight deployment. This survey provides both a theoretical framework and practical guidance for deploying high-reliability AI systems in real-world safety-critical environments.
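The output-uncertainty category described above can be illustrated with the classic max-softmax-probability baseline: a monitor that flags an input when the network's top-class confidence drops below a threshold. This is a minimal sketch, not the paper's own method; the threshold and logits below are illustrative and would in practice be calibrated on held-out in-distribution data.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_monitor(logits, threshold=0.7):
    """Flag an input as suspicious when its max softmax
    probability falls below the calibrated threshold."""
    confidence = softmax(np.asarray(logits, dtype=float)).max(axis=-1)
    return confidence < threshold, confidence

# One confident prediction, one with near-uniform (low-confidence) logits.
flags, conf = msp_monitor([[8.0, 0.5, 0.2], [1.0, 0.9, 1.1]])
# → flags = [False, True]: only the low-confidence input is flagged.
```

The monitor runs in parallel to the DNN at inference time: it consumes only the logits the model already produces, so the model itself is left unmodified.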

📝 Abstract
Deep neural networks (DNNs) are widely used in perception systems for safety-critical applications such as autonomous driving and robotics. However, DNNs remain vulnerable to various safety concerns, including generalization errors, out-of-distribution (OOD) inputs, and adversarial attacks, which can lead to hazardous failures. This survey provides a comprehensive overview of runtime safety monitoring approaches, which operate in parallel to DNNs during inference to detect these safety concerns without modifying the DNN itself. We categorize existing methods into three main groups: monitoring inputs, internal representations, and outputs. We analyze the state of the art for each category, identify strengths and limitations, and map methods to the safety concerns they address. In addition, we highlight open challenges and future research directions.
Problem

Research questions and friction points this paper is trying to address.

Monitoring DNN safety risks during runtime without model modification
Detecting generalization errors, OOD inputs, and adversarial attacks
Categorizing methods by input, internal, and output monitoring approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Runtime monitoring operates parallel to DNNs
Methods monitor inputs, representations, and outputs
Detects safety concerns without modifying DNNs
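The internal-representation branch of the taxonomy can be sketched as a monitor that records per-class centroids of a frozen network's penultimate-layer features on in-distribution data, then flags inputs whose features lie far from every centroid. This is an assumed, simplified distance-based scheme for illustration only; the `FeatureMonitor` class, its threshold rule, and the synthetic features are not from the paper.

```python
import numpy as np

class FeatureMonitor:
    """Runs alongside a frozen DNN: fits class centroids on
    in-distribution penultimate-layer features, then flags inputs
    whose features are far from all centroids. The threshold rule
    (max in-distribution distance) is a deliberately simple stand-in
    for proper calibration on held-out data."""

    def fit(self, features, labels):
        self.centroids = np.stack(
            [features[labels == c].mean(axis=0) for c in np.unique(labels)]
        )
        # Distance of every training feature to every centroid.
        d = np.linalg.norm(features[:, None] - self.centroids[None], axis=-1)
        self.threshold = d.min(axis=1).max()
        return self

    def check(self, feature):
        # Flag when the nearest centroid is farther than the threshold.
        return np.linalg.norm(self.centroids - feature, axis=-1).min() > self.threshold

# Two tight synthetic "in-distribution" feature clusters.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0, 0.1, (50, 4)), rng.normal(5, 0.1, (50, 4))])
labels = np.array([0] * 50 + [1] * 50)
mon = FeatureMonitor().fit(feats, labels)
# A point far from both clusters is flagged; an in-distribution one is not.
```

Because the monitor only reads features the DNN already computes, it fits the survey's setting of parallel, non-intrusive runtime monitoring.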
Albert Schotschneider
Department of Technical Cognitive Systems, FZI Research Center for Information Technology, Karlsruhe, Germany
Svetlana Pavlitska
Department of Technical Cognitive Systems, FZI Research Center for Information Technology, Karlsruhe, Germany; Institute for Applied Informatics and Formal Description Methods, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
J. Marius Zöllner
Professor at Karlsruhe Institute of Technology (KIT), Director at Forschungszentrum Informatik (FZI)
Intelligent Vehicles · Autonomous Driving · Robotics · Artificial Intelligence · Machine Learning