🤖 AI Summary
Current AI systems lack practical and statistically robust methods for evaluating functional correctness due to their inherently probabilistic nature. This work proposes the Statistical Confidence in Functional Correctness (SCFC) framework, which introduces statistical confidence into AI functional evaluation for the first time. By integrating specification limits, stratified probabilistic sampling, bootstrap-based confidence interval estimation, and process capability indices, SCFC connects business requirements to statistical guarantees, shifting the evaluation paradigm from point estimates to verifiable confidence assessments. Case studies of two industrial-scale AI systems demonstrate the method's practicality and usability, and the domain experts interviewed expressed a strong willingness to adopt and deploy it.
📝 Abstract
The quality assessment of Artificial Intelligence (AI) systems is a fundamental challenge due to their inherently probabilistic nature. Standards such as ISO/IEC 25059 provide a quality model, but they lack practical and statistically robust methods for assessing functional correctness. This paper proposes and evaluates the Statistical Confidence in Functional Correctness (SCFC) approach, which seeks to fill this gap by connecting business requirements to a measure of statistical confidence that accounts for both the model's average performance and its variability. The approach consists of four steps: defining quantitative specification limits, performing stratified probabilistic sampling, applying bootstrapping to estimate a confidence interval for the performance metric, and calculating a capability index as a final indicator. The approach was evaluated through a case study of two real-world industrial AI systems, involving interviews with AI experts. The experts provided valuable insights regarding the approach's utility, ease of use, and their intention to adopt it in practice. We conclude that the proposed approach is a feasible and valuable way to operationalize the assessment of functional correctness, moving the evaluation from a point estimate to a statement of statistical confidence.
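The four steps above can be illustrated with a minimal sketch. The paper's exact formulas are not given here, so this example makes several assumptions: the evaluation metric is binary correctness (accuracy), the business requirement is expressed as a hypothetical lower specification limit (`LSL`) on accuracy, and the capability index is computed in the style of a one-sided process capability index (mean distance to the limit over three standard deviations of the bootstrap distribution). The evaluation data is simulated, not from the paper's case studies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated per-example evaluation outcomes: 1 = functionally correct, 0 = not.
# (Stands in for the stratified, probabilistically sampled test set of step 2.)
outcomes = rng.binomial(1, 0.93, size=500)

# Step 1: quantitative specification limit -- an assumed business requirement
# that accuracy must not fall below 90%.
LSL = 0.90

# Step 3: bootstrap the mean accuracy to estimate a 95% confidence interval.
boot_means = np.array([
    rng.choice(outcomes, size=outcomes.size, replace=True).mean()
    for _ in range(2000)
])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])

# Step 4: a Cpk-style one-sided capability index over the bootstrap
# distribution; values above 1 suggest the process comfortably meets the limit.
cpk = (boot_means.mean() - LSL) / (3 * boot_means.std(ddof=1))

print(f"accuracy 95% CI: [{ci_low:.3f}, {ci_high:.3f}]")
print(f"capability index (Cpk-style): {cpk:.2f}")
```

The key shift this illustrates is that the deliverable is not the observed accuracy alone but an interval and a capability statement relative to the specification limit.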