How Many Ratings per Item are Necessary for Reliable Significance Testing?

📅 2024-12-04
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Single human annotations in generative AI evaluation are unreliable due to annotator disagreement and model stochasticity, undermining statistical validity in model comparisons. Method: We propose the first sample-size determination framework grounded in statistical power analysis, integrating multi-response modeling, confidence interval estimation, McNemar’s test, and empirical distribution fitting to compute the minimum number of annotations per test instance required for statistically robust pairwise model comparisons. Contribution/Results: Empirical analysis reveals that prevailing benchmarks—typically employing only 3–5 annotations per instance—are systematically underpowered; distinguishing models with similar performance often requires 5–10× more annotations than current practice. Our framework enables retrospective reliability auditing of existing evaluation datasets and prospective annotation budget planning for new benchmarks, thereby establishing a statistically rigorous standard for annotation scale in AI evaluation.
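The idea of checking whether a given number of responses per item yields enough statistical power can be illustrated with a small Monte Carlo sketch. This is not the paper's exact procedure; it assumes a simplified setup in which each rater independently judges a model output correct with some probability, items are labeled by majority vote, and the two models are compared with an exact McNemar test. The function names and parameter choices below are illustrative.

```python
import math
import random

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar p-value from the discordant-pair counts b and c
    (an exact binomial test of b successes in b + c trials against p = 0.5)."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Two-sided p-value: double the smaller tail, capped at 1.
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def simulated_power(n_items, raters_per_item, acc_a, acc_b,
                    trials=500, alpha=0.05, seed=0):
    """Monte Carlo estimate of the power of McNemar's test when each item's
    per-model correctness label is a majority vote over noisy rater judgments."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        b = c = 0
        for _ in range(n_items):
            # Each rater independently marks the model's output correct
            # with probability equal to that model's true accuracy.
            a_ok = sum(rng.random() < acc_a
                       for _ in range(raters_per_item)) * 2 > raters_per_item
            b_ok = sum(rng.random() < acc_b
                       for _ in range(raters_per_item)) * 2 > raters_per_item
            if a_ok and not b_ok:
                b += 1
            elif b_ok and not a_ok:
                c += 1
        if mcnemar_exact_p(b, c) < alpha:
            rejections += 1
    return rejections / trials
```

Running `simulated_power(200, 1, 0.6, 0.5)` versus `simulated_power(200, 15, 0.6, 0.5)` shows the pattern the summary describes: with a single rater per item the test is badly underpowered, while adding raters per item sharply increases the chance of detecting the accuracy gap.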

📝 Abstract
Most approaches to machine learning evaluation assume that machine and human responses are repeatable enough to be measured against data with unitary, authoritative, "gold standard" responses, via simple metrics such as accuracy, precision, and recall that assume scores are independent given the test item. However, AI models have multiple sources of stochasticity, and the human raters who create gold standards tend to disagree with each other, often in meaningful ways; hence a single output response per input item may not provide enough information. We introduce methods for determining whether an (existing or planned) evaluation dataset has enough responses per item to reliably compare the performance of one model to another. We apply our methods to several of the very few extant gold standard test sets with multiple disaggregated responses per item and show that there are usually not enough responses per item to reliably compare the performance of one model against another. Our methods also allow us to estimate the number of responses per item for hypothetical datasets with response distributions similar to those of the existing datasets we study. When two models are very far apart in their predictive performance, fewer raters are needed to confidently compare them, as expected. However, as the models draw closer, a larger number of raters than is currently typical in annotation collection is needed to ensure that the power analysis correctly reflects the difference in performance.
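A standard textbook version of the sample-size question the abstract raises is Connor's approximate formula for McNemar's test, which gives the number of paired items needed to detect a given imbalance in the discordant-pair probabilities. This is a generic power-analysis calculation, not the paper's method; the discordant probabilities below are illustrative values.

```python
from math import ceil, sqrt
from statistics import NormalDist

def mcnemar_sample_size(p10, p01, alpha=0.05, power=0.80):
    """Approximate number of paired test items needed for McNemar's test
    (Connor's formula). p10 and p01 are the probabilities of the two
    discordant outcomes (model A right / B wrong, and vice versa)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = NormalDist().inv_cdf(power)           # quantile for the target power
    psi, d = p10 + p01, p10 - p01
    n = (z_a * sqrt(psi) + z_b * sqrt(psi - d * d)) ** 2 / d ** 2
    return ceil(n)
```

For example, detecting a small gap (discordant probabilities 0.12 vs. 0.08) at 80% power requires on the order of a thousand items, while a larger gap (0.20 vs. 0.05) needs far fewer, mirroring the abstract's point that closely matched models demand much more data.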
Problem

Research questions and friction points this paper is trying to address.

Determining sufficient ratings per item for reliable significance testing
Assessing dataset adequacy for null hypothesis statistical testing
Evaluating reliability of metrics with limited human/model responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Power-analysis method for assessing whether a dataset has enough responses per item
Showed that typical annotation scales (3-5 responses per item) are underpowered; comparing similar models can require 5-10x more
Applied analysis to existing gold standard test sets
Christopher M. Homan
Department of Computer Science, Rochester Institute of Technology, Rochester, NY 14607
Flip Korn
Google Research, New York, NY 10011
Chris Welty
Google Research
Crowdsourcing · Applied Ontology · Semantic Web · Natural Language Processing