Rich Insights from Cheap Signals: Efficient Evaluations via Tensor Factorization

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fine-grained evaluation of generative models is hindered by the high cost of human annotations and the misalignment between automatic metrics and human preferences. This work proposes a sample-efficient tensor factorization approach that jointly models prompts, models, and scoring dimensions to learn their latent representations, integrating low-cost automatic scores with a small amount of human annotation. By leveraging a modest calibration set, the method aligns its predictions with human preferences, substantially reducing reliance on human labeling while predicting human judgments more accurately on a per-prompt basis. This enables the construction of fine-grained performance leaderboards and reliable estimation of model quality from automatic scores alone.

📝 Abstract
Moving beyond evaluations that collapse performance across heterogeneous prompts toward fine-grained evaluation at the prompt level, or within relatively homogeneous subsets, is necessary to diagnose generative models' strengths and weaknesses. Such fine-grained evaluations, however, suffer from a data bottleneck: human gold-standard labels are too costly at this scale, while automated ratings are often misaligned with human judgment. To resolve this challenge, we propose a novel statistical model based on tensor factorization that merges cheap autorater data with a limited set of human gold-standard labels. Specifically, our approach uses autorater scores to pretrain latent representations of prompts and generative models, and then aligns those pretrained representations to human preferences using a small calibration set. This sample-efficient methodology is robust to autorater quality, more accurately predicts human preferences on a per-prompt basis than standard baselines, and provides tight confidence intervals for key statistical parameters of interest. We also showcase the practical utility of our method by constructing granular leaderboards based on prompt qualities and by estimating model performance solely from autorater scores, eliminating the need for additional human annotations.
Problem

Research questions and friction points this paper is trying to address.

fine-grained evaluation
data bottleneck
human annotation
automated ratings
generative models
Innovation

Methods, ideas, or system contributions that make the work stand out.

tensor factorization
fine-grained evaluation
sample-efficient calibration
autorater alignment
generative model assessment