🤖 AI Summary
Prediction-Powered Inference (PPI) for large language model evaluation suffers from high variance and a heavy reliance on abundant human annotations, which are scarce in practice. Method: We propose a posterior regression-enhanced PPI framework that integrates robust regression (Huber and quantile regression) into PPI to suppress variance in the low-sample regime; introduces two novel PPI variants that relax the conventional dependence on large-scale annotation; and combines bias correction with posterior calibration to ensure unbiased estimation. Contribution/Results: Evaluated on text- and image-generation assessment tasks, our method matches the accuracy of standard PPI run with hundreds of labels while requiring only 5–20 annotations, and it reduces estimation variance by 40%–65%, significantly improving statistical efficiency and reliability under sparse-labeling regimes.
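To build intuition for why robust estimation helps in the few-label regime, here is a minimal numpy-only sketch of a Huber M-estimate of location, fit by iteratively reweighted least squares. This is an illustration of the general robust-regression idea the summary names, not the paper's actual method; the function name, the toy residuals, and the `delta` value are all made up for the example.

```python
import numpy as np

def huber_location(x, delta=1.0, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares.

    Points within `delta` of the current estimate get full weight;
    points farther away are down-weighted in proportion to their distance,
    so a few wild values cannot dominate the estimate.
    """
    mu = np.median(x)  # robust starting point
    for _ in range(max_iter):
        r = x - mu
        # Weight is 1 inside the delta band, delta/|r| outside it.
        w = delta / np.maximum(np.abs(r), delta)
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# Tiny labeled pool with one wild annotation error, mimicking the
# 5-20 annotation regime described above (values are invented).
residuals = np.array([0.02, -0.01, 0.03, 0.00, -0.02, 0.9])
plain_mean = np.mean(residuals)             # dragged toward the outlier
robust_mean = huber_location(residuals, delta=0.05)  # stays near the inliers
```

With only a handful of labels, a single mislabeled sample can shift a plain mean dramatically; down-weighting it is one route to the variance suppression the summary claims.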
📝 Abstract
Continually evaluating large generative models presents a unique challenge. Human annotations are often necessary to evaluate high-level properties of these models (e.g. in text or images). However, collecting human annotations of samples can be resource-intensive, and using other machine learning systems to provide the annotations (automatic evaluation) can introduce systematic errors into the evaluation. The Prediction-Powered Inference (PPI) framework provides a way of leveraging both the statistical power of automatic evaluation and a small pool of labelled data to produce a low-variance, unbiased estimate of the quantity of interest. However, most work on PPI considers a relatively sizable set of labelled samples, which is not always practical to obtain. To this end, we present two new PPI-based techniques that leverage robust regressors to produce even lower-variance estimators in the few-label regime.
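The combination of automatic evaluation and a small labelled pool that the abstract describes can be sketched with the classical PPI mean estimator: take the automatic metric's mean over the large unlabeled pool, then correct it with a rectifier (the mean gap between human and automatic scores) estimated on the labelled pool. The synthetic data, the bias in the simulated metric, and the sample sizes below are all assumptions for illustration; this shows standard PPI, not the paper's new low-label variants.

```python
import numpy as np

rng = np.random.default_rng(0)

n_unlabeled = 10_000   # samples scored only by the automatic metric
n_labeled = 200        # samples with human annotations
true_mean = 0.6        # ground-truth quantity we want to estimate

def auto_score(y, rng):
    # Simulated automatic metric: correlated with the human score y
    # but systematically biased (made-up coefficients).
    return 0.7 * y + 0.05 + 0.05 * rng.standard_normal(y.shape)

y_labeled = true_mean + 0.2 * rng.standard_normal(n_labeled)
y_unlabeled = true_mean + 0.2 * rng.standard_normal(n_unlabeled)

f_labeled = auto_score(y_labeled, rng)
f_unlabeled = auto_score(y_unlabeled, rng)

# Classical PPI mean estimate: automatic-metric mean on the unlabeled
# pool, plus a rectifier estimated on the small labelled pool.
rectifier = np.mean(y_labeled - f_labeled)
theta_ppi = np.mean(f_unlabeled) + rectifier

# Baselines for comparison:
theta_auto_only = np.mean(f_unlabeled)     # low variance, but biased
theta_labels_only = np.mean(y_labeled)     # unbiased, but higher variance
```

The automatic-only estimate inherits the metric's systematic error, while the rectifier cancels it; the labels-only estimate is unbiased but noisy, which is exactly the trade-off PPI balances.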