Statistical Confidence in Functional Correctness: An Approach for AI Product Functional Correctness Evaluation

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI systems lack practical and statistically robust methods for evaluating functional correctness due to their inherently probabilistic nature. This work proposes the Statistical Confidence in Functional Correctness (SCFC) framework, which brings statistical confidence into the functional evaluation of AI systems. By integrating specification limits, stratified probabilistic sampling, bootstrap-based confidence interval estimation, and process capability indices, SCFC connects business requirements to statistical guarantees, shifting the paradigm from point estimates to verifiable confidence assessments. Case studies on two industrial-scale AI systems indicate that domain experts find the method practical and easy to use, and that they are willing to adopt and deploy it.
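Among the steps the summary lists, stratified probabilistic sampling is the one that determines which items are evaluated at all. The sketch below shows one common variant, proportional stratified random sampling, where each stratum contributes in proportion to its share of the population; the function names and the tuple-based item representation are illustrative assumptions, not the paper's exact procedure.

```python
import random
from collections import defaultdict

def stratified_sample(items, stratum_of, n_total, seed=7):
    """Proportional stratified random sample: each stratum contributes
    a number of items proportional to its share of the population."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for item in items:
        strata[stratum_of(item)].append(item)
    sample = []
    for members in strata.values():
        # At least one item per stratum, otherwise proportional allocation.
        k = max(1, round(n_total * len(members) / len(items)))
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample

# Hypothetical population: 80 items of type "a", 20 of type "b".
population = [("a", i) for i in range(80)] + [("b", i) for i in range(20)]
picked = stratified_sample(population, lambda it: it[0], n_total=10)
```

With this allocation a sample of 10 items contains 8 from stratum "a" and 2 from stratum "b", mirroring the 80/20 population split.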

📝 Abstract
The quality assessment of Artificial Intelligence (AI) systems is a fundamental challenge due to their inherently probabilistic nature. Standards such as ISO/IEC 25059 provide a quality model, but they lack practical and statistically robust methods for assessing functional correctness. This paper proposes and evaluates the Statistical Confidence in Functional Correctness (SCFC) approach, which seeks to fill this gap by connecting business requirements to a measure of statistical confidence that considers both the model's average performance and its variability. The approach consists of four steps: defining quantitative specification limits, performing stratified and probabilistic sampling, applying bootstrapping to estimate a confidence interval for the performance metric, and calculating a capability index as a final indicator. The approach was evaluated through case studies of two real-world industrial AI systems, involving interviews with AI experts. Valuable insights were collected from the experts regarding the utility, ease of use, and intention to adopt the methodology in practical scenarios. We conclude that the proposed approach is a feasible and valuable way to operationalize the assessment of functional correctness, moving the evaluation from a point estimate to a statement of statistical confidence.
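The last two of the four steps can be sketched compactly: a percentile bootstrap confidence interval for the performance metric, and a one-sided capability index measured against a lower specification limit (LSL) in the style of the classic Cpk index from statistical process control. The function names, the per-item score representation, and the choice of the percentile bootstrap are illustrative assumptions, not the paper's exact formulation.

```python
import random
import statistics

def bootstrap_ci(scores, n_boot=5000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean score.

    Resamples the observed scores with replacement n_boot times and
    returns the (alpha/2, 1 - alpha/2) quantiles of the resampled means.
    """
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        statistics.fmean(rng.choices(scores, k=n)) for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def capability_index(scores, lsl):
    """One-sided capability index (Cpk-style) against a lower spec limit:
    (mean - LSL) / (3 * sample standard deviation)."""
    mu = statistics.fmean(scores)
    sigma = statistics.stdev(scores)
    return (mu - lsl) / (3 * sigma)

# Hypothetical per-item correctness scores from an evaluated AI system.
scores = [0.9, 0.85, 0.95, 0.8, 0.92, 0.88, 0.9, 0.93]
lo, hi = bootstrap_ci(scores)
cpk = capability_index(scores, lsl=0.7)
```

A capability index above 1 would indicate that the lower specification limit sits more than three standard deviations below the mean performance, which is how the point estimate is turned into a variability-aware statement of confidence.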
Problem

Research questions and friction points this paper is trying to address.

functional correctness
AI quality assessment
statistical confidence
ISO/IEC 25059
performance variability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statistical Confidence
Functional Correctness
Bootstrapping
Capability Index
AI Quality Evaluation
Wallace Albertini
Pontifical Catholic University of Rio de Janeiro
Marina Condé Araújo
Pontifical Catholic University of Rio de Janeiro
Júlia Condé Araújo
Pontifical Catholic University of Rio de Janeiro
Antonio Pedro Santos Alves
Pontifical Catholic University of Rio de Janeiro (PUC-Rio)
Software Engineering
Marcos Kalinowski
Professor, Pontifical Catholic University of Rio de Janeiro (PUC-Rio)
Empirical Software Engineering · AI Engineering · AI4SE · Human Aspects in Software Engineering