AI Summary
This paper addresses the challenge of runtime monitoring for black-box AI models deployed in production, proposing a unified paradigm to simultaneously detect violations of input robustness and individual fairness: triggering real-time alerts when semantically similar inputs yield significantly divergent outputs. Methodologically, it is the first to formulate both properties as a single monitoring problem, designing an online monitoring algorithm for fixed-radius nearest neighbor (FRNN) search based on Binary Decision Diagrams (BDDs) and introducing an efficient parallel L∞-norm distance computation technique. Key contributions include: (1) the first framework enabling joint, synergistic monitoring of robustness and individual fairness; (2) a lightweight, low-latency, low-overhead real-time detection mechanism suitable for production environments; and (3) empirical validation of the Clemont tool on standard benchmarks, demonstrating a 42% reduction in detection latency and a 58% decrease in computational overhead compared to prior approaches.
Abstract
Input-output robustness appears in various forms in the literature, such as robustness of AI models to adversarial or semantic perturbations and individual fairness of AI models that make decisions about humans. We propose runtime monitoring of input-output robustness of deployed, black-box AI models, where the goal is to design monitors that observe one long execution sequence of the model and raise an alarm whenever it is detected that two similar inputs from the past led to dissimilar outputs. This way, monitoring will complement existing offline "robustification" approaches to increase the trustworthiness of AI decision-makers. We show that the monitoring problem can be cast as the fixed-radius nearest neighbor (FRNN) search problem, which, despite being well-studied, lacks suitable online solutions. We present our tool Clemont, which offers a number of lightweight monitors, some of which use upgraded online variants of existing FRNN algorithms, and one of which uses a novel algorithm based on binary decision diagrams -- a data structure commonly used in software and hardware verification. We have also developed an efficient parallelization technique that can substantially cut down the computation time of monitors for which the distance between input-output pairs is measured using the $L_\infty$ norm. Using standard benchmarks from the literature on adversarial and semantic robustness and individual fairness, we perform a comparative study of the different monitors in Clemont, and demonstrate their effectiveness in correctly detecting robustness violations at runtime.
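To make the FRNN formulation concrete, the following is a minimal brute-force sketch of an online monitor: for each new input-output pair it searches the history for past inputs within a fixed $L_\infty$ radius whose outputs differ by more than a threshold. This is an illustration of the problem statement only, with hypothetical parameter names (`input_radius`, `output_threshold`); it is not Clemont's BDD-based or parallelized algorithm, which avoids exactly this linear scan.

```python
import numpy as np

class NaiveFRNNMonitor:
    """Brute-force online robustness/fairness monitor (illustrative sketch).

    Raises an alarm (returns violating indices) when a new input lies within
    `input_radius` of a past input under the L-infinity norm, but the two
    outputs differ by more than `output_threshold`.
    """

    def __init__(self, input_radius: float, output_threshold: float):
        self.input_radius = input_radius
        self.output_threshold = output_threshold
        self.inputs: list[np.ndarray] = []   # observed inputs, in arrival order
        self.outputs: list[np.ndarray] = []  # corresponding model outputs

    def observe(self, x, y) -> list[int]:
        """Process one (input, output) pair; return indices of past pairs
        that, together with (x, y), witness a robustness violation."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        violations = []
        for i, (xi, yi) in enumerate(zip(self.inputs, self.outputs)):
            close_inputs = np.max(np.abs(x - xi)) <= self.input_radius
            far_outputs = np.max(np.abs(y - yi)) > self.output_threshold
            if close_inputs and far_outputs:
                violations.append(i)
        self.inputs.append(x)
        self.outputs.append(y)
        return violations
```

Each `observe` call here costs time linear in the history length, which is why an efficient online FRNN data structure matters for long executions.

```python
monitor = NaiveFRNNMonitor(input_radius=0.1, output_threshold=0.5)
monitor.observe([0.00], [0.0])   # first observation, nothing to compare
monitor.observe([0.05], [1.0])   # near pair 0 in input, far in output -> alarm
monitor.observe([1.00], [0.0])   # far from all past inputs -> no alarm
```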