AI Summary
This study addresses the systemic safety risks arising from neglected uncertainty propagation in AI-augmented security pipelines. We formally model the cascading propagation of performance uncertainty from AI subsystems across end-to-end workflows, which is, to our knowledge, a first in this domain. To quantify such propagation, we propose a simulation-based error propagation framework that integrates formal methods and rigorous risk analysis, validated through two aviation-domain case studies. Our contributions are threefold: (1) the first uncertainty propagation simulator specifically designed for AI-augmented security pipelines; (2) empirical validation of cross-module error amplification pathways and their safety-critical impact on decision integrity; and (3) actionable, transferable governance strategies and standards-adaptation recommendations to support the design, verification, and regulatory oversight of high-assurance AI systems.
Abstract
The use of AI technologies is percolating into the secure development of software-based systems, with a growing trend of composing AI-based subsystems (of uncertain performance) into automated pipelines. This presents a fundamental research challenge and poses a serious threat to safety-critical domains (e.g., aviation). Although uncertainty is well studied in risk analysis, no previous work has estimated the uncertainty of AI-augmented systems by accounting for how errors propagate through the pipeline. We provide the formal underpinnings for capturing uncertainty propagation, develop a simulator to quantify it, and evaluate the simulated propagation of errors in two case studies. We discuss the generalizability of our approach and present policy implications and recommendations for aviation. Future work includes extending the approach and investigating the metrics required for validation in the aviation domain.
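To give a rough sense of the cross-module error amplification the abstract refers to, the minimal sketch below runs a Monte Carlo simulation over a small linear pipeline of AI modules, where each module errs more often when its input is already erroneous. The module names, error rates, and the two-state (clean/erroneous) model are hypothetical assumptions chosen for illustration; they are not taken from the paper's simulator or case studies.

```python
"""Minimal sketch of cross-module error propagation (illustration only).

Assumptions (not from the paper): a linear pipeline of AI modules, each with a
baseline error probability on clean inputs and a higher, amplified error
probability when its input is already erroneous.
"""
import random

# Hypothetical modules: (name, P(error | clean input), P(error | erroneous input))
PIPELINE = [
    ("code-analysis model", 0.05, 0.60),
    ("triage model",        0.08, 0.70),
    ("mitigation planner",  0.03, 0.50),
]


def simulate(n_runs: int = 100_000, seed: int = 0) -> float:
    """Estimate the probability that the pipeline's final output is erroneous."""
    rng = random.Random(seed)
    erroneous_outputs = 0
    for _ in range(n_runs):
        erroneous = False  # the item starts out clean
        for _name, p_clean, p_amplified in PIPELINE:
            p_err = p_amplified if erroneous else p_clean
            erroneous = rng.random() < p_err  # module may introduce, propagate, or correct an error
        erroneous_outputs += erroneous
    return erroneous_outputs / n_runs


if __name__ == "__main__":
    end_to_end = simulate()
    standalone_last = PIPELINE[-1][1]  # final module's error rate evaluated in isolation
    print(f"simulated end-to-end error rate:      {end_to_end:.3f}")
    print(f"final module's stand-alone error rate: {standalone_last:.3f}")
```

Under these illustrative numbers, the simulated end-to-end error rate is noticeably higher than the final module's stand-alone rate, because upstream errors are amplified rather than corrected downstream; this is the kind of compounding effect the paper's simulator is designed to quantify rigorously.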