🤖 AI Summary
Relying on a single human annotation per test item in generative AI evaluation is unreliable because of annotator disagreement and model stochasticity, undermining the statistical validity of model comparisons.
Method: We propose the first sample-size determination framework grounded in statistical power analysis, integrating multi-response modeling, confidence interval estimation, McNemar’s test, and empirical distribution fitting to compute the minimum number of annotations per test instance required for statistically robust pairwise model comparisons.
Contribution/Results: Empirical analysis reveals that prevailing benchmarks—typically employing only 3–5 annotations per instance—are systematically underpowered; distinguishing models with similar performance often requires 5–10× more annotations than current practice. Our framework enables retrospective reliability auditing of existing evaluation datasets and prospective annotation budget planning for new benchmarks, thereby establishing a statistically rigorous standard for annotation scale in AI evaluation.
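To make the kind of analysis concrete, the sketch below runs a Monte Carlo power analysis: for increasing per-item annotation budgets, it simulates rater judgments for two hypothetical models and checks how often an exact McNemar test detects the gap. This is a minimal illustration, not the paper's implementation; the item count, per-model correctness rates, rater model, and majority-vote aggregation are all assumptions made for the example.

```python
# Monte Carlo power analysis sketch: how many annotations per item are needed
# before an exact McNemar test reliably separates two models?  The item count,
# per-model correctness rates, rater model, and majority-vote aggregation are
# illustrative assumptions, not the paper's procedure.
import numpy as np
from scipy.stats import binomtest  # requires scipy >= 1.7

rng = np.random.default_rng(0)

N_ITEMS = 200            # test items in the benchmark (assumed)
P_A, P_B = 0.72, 0.68    # assumed per-rater probabilities of judging each model correct
ALPHA = 0.05
TARGET_POWER = 0.80
N_SIM = 500              # Monte Carlo repetitions per annotation budget


def simulate_pvalue(k: int) -> float:
    """Simulate k annotations per item for both models and return the p-value
    of an exact McNemar test on majority-vote correctness."""
    votes_a = rng.binomial(k, P_A, size=N_ITEMS) > k / 2   # majority vote over k raters
    votes_b = rng.binomial(k, P_B, size=N_ITEMS) > k / 2
    b = int(np.sum(votes_a & ~votes_b))   # discordant: A judged correct, B not
    c = int(np.sum(~votes_a & votes_b))   # discordant: B judged correct, A not
    if b + c == 0:
        return 1.0
    # The exact McNemar test reduces to a binomial test on the discordant pairs.
    return binomtest(b, b + c, 0.5).pvalue


def estimated_power(k: int) -> float:
    """Fraction of simulations in which the difference is detected at ALPHA."""
    return sum(simulate_pvalue(k) < ALPHA for _ in range(N_SIM)) / N_SIM


if __name__ == "__main__":
    for k in (1, 3, 5, 10, 20, 40):
        p = estimated_power(k)
        flag = "OK" if p >= TARGET_POWER else "underpowered"
        print(f"{k:>3} annotations/item -> power ~{p:.2f} ({flag})")
```

The smallest budget whose estimated power clears the target (0.80 here) is the minimum number of annotations per item for this hypothetical model pair; closer models push that minimum up sharply.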
📝 Abstract
Most approaches to machine learning evaluation assume that machine and human responses are repeatable enough to be measured against data with unitary, authoritative, "gold standard" responses, via simple metrics such as accuracy, precision, and recall that assume scores are independent given the test item. However, AI models have multiple sources of stochasticity, and the human raters who create gold standards tend to disagree with each other, often in meaningful ways, so a single output response per input item may not provide enough information. We introduce methods for determining whether an (existing or planned) evaluation dataset has enough responses per item to reliably compare the performance of one model to another. We apply our methods to several of the very few extant gold standard test sets with multiple disaggregated responses per item and show that there are usually not enough responses per item to reliably compare the performance of one model against another. Our methods also allow us to estimate the number of responses per item needed for hypothetical datasets with response distributions similar to the existing datasets we study. When two models are very far apart in their predictive performance, fewer raters are needed to confidently compare them, as expected. However, as the models draw closer, we find that many more raters than are currently typical in annotation collection are needed to ensure that the power analysis correctly reflects the difference in performance.
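For the retrospective use case (checking whether an existing multi-rater dataset already has enough responses per item), one simple proxy is a bootstrap confidence interval on the rating gap between two models: if the interval straddles zero at the current annotation budget, the dataset is likely underpowered for that comparison. The sketch below assumes a hypothetical layout of disaggregated per-item, per-rater correctness labels; it is an illustration of the idea, not the paper's method.

```python
# Retrospective audit sketch: given disaggregated rater labels for two models on
# the same items, bootstrap a confidence interval on the mean rating gap.  The
# array layout and rating probabilities below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: ratings[item, rater] = 1 if that rater judged the model's
# output on that item correct, else 0.  Here: 150 items x 3 raters per model.
n_items, n_raters = 150, 3
ratings_a = rng.binomial(1, 0.70, size=(n_items, n_raters))
ratings_b = rng.binomial(1, 0.66, size=(n_items, n_raters))


def bootstrap_gap_ci(a, b, n_boot=2000, level=0.95):
    """Percentile bootstrap CI on the mean rating difference between models,
    resampling items and (within each item) raters to capture both noise sources."""
    items_n, raters_n = a.shape
    gaps = np.empty(n_boot)
    for i in range(n_boot):
        items = rng.integers(0, items_n, items_n)                  # resample items
        raters = rng.integers(0, raters_n, (items_n, raters_n))    # resample raters within items
        gaps[i] = (a[items[:, None], raters] - b[items[:, None], raters]).mean()
    lo, hi = np.quantile(gaps, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi


lo, hi = bootstrap_gap_ci(ratings_a, ratings_b)
verdict = "distinguishable" if (lo > 0 or hi < 0) else "NOT distinguishable"
print(f"95% CI on rating gap: [{lo:+.3f}, {hi:+.3f}] -> models {verdict} at this budget")
```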