🤖 AI Summary
Generative AI evaluation faces a dual challenge: high human annotation costs and substantial bias in synthetic labels. This paper proposes a cost-aware active evaluation framework that, under a fixed budget, jointly leverages weak annotators (e.g., AI self-scoring models) and strong annotators (e.g., human experts) to yield unbiased, low-variance estimates of the mean strong-annotator score. The key contribution is a cost-optimal dynamic allocation strategy for weak and strong annotators, integrating principles from active learning, prediction-powered statistical inference, and optimal resource allocation. Experiments demonstrate that, on high-variance tasks, the method achieves evaluation accuracy comparable to full human annotation while reducing total annotation cost by an average of 42%. This establishes a scalable, cost-effective evaluation paradigm for generative AI systems.
📝 Abstract
The development lifecycle of generative AI systems requires continual evaluation, data acquisition, and annotation, which is costly in both resources and time. In practice, rapid iteration often makes it necessary to rely on synthetic annotation data because of its low cost, despite the potential for substantial bias. In this paper, we develop novel, cost-aware methods for actively balancing the use of a cheap, but often inaccurate, weak rater -- such as a model-based autorater that is designed to automatically assess the quality of generated content -- with a more expensive, but also more accurate, strong rater alternative such as a human. More specifically, the goal of our approach is to produce a low-variance, unbiased estimate of the mean of the target "strong" rating, subject to some total annotation budget. Building on recent work in active and prediction-powered statistical inference, we derive a family of cost-optimal policies for allocating a given annotation budget between weak and strong raters so as to maximize statistical efficiency. Using synthetic and real-world data, we empirically characterize the conditions under which these policies yield improvements over prior methods. We find that, especially in tasks where there is high variability in the difficulty of examples, our policies can achieve the same estimation precision at a far lower total annotation budget than standard evaluation methods.
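To make the core idea concrete, the sketch below simulates the basic prediction-powered estimator that this line of work builds on: rate every example with the cheap weak rater, rate only a small random subset with the expensive strong rater, and use that subset to debias the weak mean. All numbers (sample sizes, rater noise and bias) are hypothetical illustrations, not the paper's data or its specific allocation policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated ratings (not from the paper).
N = 10_000                     # examples rated by the cheap weak rater
n = 500                        # subset also rated by the expensive strong rater
strong = rng.normal(3.0, 1.0, size=N)          # "true" strong ratings
weak = strong + rng.normal(0.5, 0.8, size=N)   # biased, noisy weak ratings

# Naive weak-only estimate: inherits the weak rater's bias.
naive = weak.mean()

# Prediction-powered estimate: weak mean plus a bias correction
# measured on the n examples that received both ratings.
# Because the subset is sampled uniformly at random, this is an
# unbiased estimator of the mean strong rating.
idx = rng.choice(N, size=n, replace=False)
ppi = weak.mean() + (strong[idx] - weak[idx]).mean()

print(f"naive weak-only estimate:    {naive:.3f}")
print(f"prediction-powered estimate: {ppi:.3f}")
print(f"true strong mean:            {strong.mean():.3f}")
```

The paper's contribution goes beyond this baseline: it chooses how many weak and strong ratings to buy (and where to spend them) so as to minimize the variance of the final estimate under a total cost constraint, which matters most when example difficulty is highly variable.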