Auto-Evaluation with Few Labels through Post-hoc Regression

📅 2024-11-19
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
Prediction-Powered Inference (PPI) for large language model evaluation suffers from high variance and relies heavily on abundant human annotations, which are scarce in practice. Method: We propose a post-hoc regression-enhanced PPI framework that integrates robust regression (Huber and quantile regression) into PPI to suppress variance in the low-sample regime, introduces two novel PPI variants that relax the conventional dependence on large-scale annotation, and combines bias correction with posterior calibration to ensure unbiased estimation. Contribution/Results: Evaluated on text and image generation assessment tasks, our method matches the accuracy of standard PPI with hundreds of labels while requiring only 5–20 annotations, and it reduces estimation variance by 40%–65%, significantly improving statistical efficiency and reliability under sparse labeling.

📝 Abstract
Continually evaluating large generative models presents a unique challenge. Often, human annotations are necessary to evaluate high-level properties of these models (e.g. in text or images). However, collecting human annotations of samples can be resource intensive, and using other machine learning systems to provide the annotations, or automatic evaluation, can introduce systematic errors into the evaluation. The Prediction Powered Inference (PPI) framework provides a way of leveraging both the statistical power of automatic evaluation and a small pool of labelled data to produce a low-variance, unbiased estimate of the quantity being evaluated. However, most work on PPI considers a relatively sizable set of labelled samples, which is not always practical to obtain. To this end, we present two new PPI-based techniques that leverage robust regressors to produce even lower variance estimators in the few-label regime.
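The PPI idea sketched in the abstract can be illustrated in a few lines: an automatic evaluator scores a large unlabeled pool, and a small human-labelled pool supplies a "rectifier" that removes the evaluator's systematic bias. This is a minimal sketch, not the paper's code; the function name and the power-tuning weight `lam` (which, set to 1, recovers standard PPI, and is tuned in PPI++ to minimize variance) are illustrative assumptions:

```python
import numpy as np

def ppi_mean_estimate(y_labeled, yhat_labeled, yhat_unlabeled, lam=1.0):
    """Prediction-powered estimate of the mean of Y.

    y_labeled      : human labels on the small labelled pool
    yhat_labeled   : automatic-evaluator predictions on that same pool
    yhat_unlabeled : predictions on the large unlabeled pool
    lam            : power-tuning weight (lam=1 is standard PPI)
    """
    # The rectifier estimates the evaluator's bias from the labelled pool.
    rectifier = np.mean(y_labeled) - lam * np.mean(yhat_labeled)
    # Cheap predictions on the big pool, corrected by the rectifier.
    return lam * np.mean(yhat_unlabeled) + rectifier
```

If the automatic evaluator systematically under-scores by 0.5, the rectifier cancels that bias exactly in expectation, which is why the estimator remains unbiased while its variance shrinks with the size of the unlabeled pool.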
Problem

Research questions and friction points this paper is trying to address.

Addresses high variance in PPI++ with few labels
Analyzes PPI++ via regression for small sample sizes
Proposes robust regressors to improve few-label inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses robust regressors for low-variance estimators
Analyzes PPI++ via ordinary least squares regression
Improves inference with scarce labeled data
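To see why a robust regressor helps in the few-label regime, consider fitting a line from evaluator scores to human labels when one of only a handful of annotations is an outlier: ordinary least squares is pulled badly off target, while a Huber fit downweights the outlier. The sketch below (an illustration of the general technique, not the paper's implementation; the IRLS routine and `delta` threshold are assumptions) fits a 1-D Huber regression with iteratively reweighted least squares using only numpy:

```python
import numpy as np

def huber_fit(x, y, delta=0.5, iters=50):
    """Fit y ~ a*x + b under the Huber loss via iteratively
    reweighted least squares (IRLS)."""
    X = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)  # first pass is plain OLS
    for _ in range(iters):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        r = y - X @ coef
        # Huber weights: quadratic loss inside |r| <= delta, linear outside.
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
    return coef  # (slope, intercept)
```

On data lying on y = 2x + 1 with one annotation corrupted by +10, OLS drags the slope toward roughly 3.4, while the Huber fit stays near 2, which is the kind of low-variance, outlier-resistant behaviour the paper exploits with only 5–20 labels.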
Benjamin Eyre
Columbia University
David Madras
University of Toronto
Machine Learning