Incentivizing High-Quality Human Annotations with Golden Questions

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In large language model (LLM) training, paid annotators often lack sufficient quality incentives, and annotation quality remains difficult to measure and control. Method: Grounded in principal-agent theory, this paper proposes a golden-question-based incentive mechanism. It introduces two criteria for golden questions, namely high certainty and format consistency with normal questions, and establishes, for the first time, that under strategic agent behavior the hypothesis-testing convergence rate is Θ(1/√(n log n)), in contrast to the exponential rates of classical large-deviation theory. The approach integrates principal-agent modeling, maximum-likelihood estimation, golden-question selection, and incentive-compatible experimental design. Contribution/Results: Empirical evaluation on human preference data shows that the mechanism captures annotator behavior more accurately than conventional instructed manipulation checks, improving the measurability and controllability of annotation quality and offering a verifiable, generalizable incentive paradigm for constructing high-quality training data.

📝 Abstract
Human-annotated data plays a vital role in training large language models (LLMs), for example in supervised fine-tuning and human preference alignment. However, it is not guaranteed that paid human annotators produce high-quality data. In this paper, we study how to incentivize human annotators to do so. We start from a principal-agent model of the dynamics between the company (the principal) and the annotator (the agent), where the principal can only monitor annotation quality by examining n samples. We investigate maximum likelihood estimators (MLE) and the corresponding hypothesis testing to incentivize annotators: the agent is given a bonus if the MLE passes the test. By analyzing the variance of the outcome, we show that the strategic behavior of the agent makes the hypothesis testing very different from traditional ones: unlike the exponential rate proved by large deviation theory, the principal-agent model's hypothesis testing rate is of Θ(1/√(n log n)). Our theory implies two criteria for the golden questions used to monitor annotator performance: they should be of (1) high certainty and (2) similar format to normal questions. In that light, we select a set of golden questions in human preference data. Through incentive-compatible experiments, we find that the annotators' behavior is better revealed by those golden questions than by traditional survey techniques such as instructed manipulation checks.
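The bonus mechanism in the abstract can be sketched as a one-sided test on golden-question accuracy. A minimal Python illustration, assuming a normal approximation to the binomial and an illustrative accuracy threshold `p0`; the hypothetical `bonus_decision` helper is not the paper's exact test statistic:

```python
from statistics import NormalDist

def bonus_decision(correct: int, n: int, p0: float = 0.8, alpha: float = 0.05) -> bool:
    """Pay the bonus iff the MLE of annotator accuracy passes a one-sided test.

    correct : golden questions answered correctly
    n       : golden questions examined by the principal
    p0      : minimum acceptable accuracy under the null (illustrative value)
    alpha   : significance level of the one-sided test

    Simplified sketch: reject H0: p <= p0 in favor of p > p0 using a
    normal approximation; the paper's actual test and constants may differ.
    """
    p_hat = correct / n                          # maximum-likelihood estimate
    se = (p0 * (1 - p0) / n) ** 0.5              # standard error under the null
    z = (p_hat - p0) / se                        # z-statistic
    return z > NormalDist().inv_cdf(1 - alpha)   # one-sided rejection region

# Example: 45/50 correct passes the test; exactly 40/50 (p_hat = p0) does not.
print(bonus_decision(45, 50), bonus_decision(40, 50))
```

Because the agent behaves strategically, the paper shows that such tests separate effort levels only at the slower Θ(1/√(n log n)) rate rather than the exponential rate of classical large-deviation analysis.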
Problem

Research questions and friction points this paper is trying to address.

Incentivizing high-quality human annotations for LLMs
Principal-agent model to monitor annotator performance
Golden questions criteria: high certainty, similar format
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses golden questions for quality control
Employs principal-agent model dynamics
Applies MLE and hypothesis testing
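The "high certainty" criterion above could be operationalized by keeping only questions whose outcome is near-certain. A hypothetical sketch (`select_golden`, `p_win`, and the 0.95 threshold are illustrative assumptions, not from the paper):

```python
def select_golden(items, threshold=0.95):
    """Keep preference pairs whose outcome is near-certain in either direction.

    items     : list of (question_id, p_win) pairs, where p_win is an
                assumed reference estimate that the preferred answer wins
    threshold : certainty cutoff (illustrative)
    """
    return [qid for qid, p in items if p >= threshold or p <= 1 - threshold]

# Only q1 (win prob 0.98) and q3 (win prob 0.03) are near-certain.
pool = [("q1", 0.98), ("q2", 0.55), ("q3", 0.03), ("q4", 0.90)]
print(select_golden(pool))
```

The second criterion, similar format to normal questions, would additionally require that selected items be indistinguishable from the regular annotation stream, so the annotator cannot single them out for extra effort.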
Shang Liu
Imperial College Business School, Imperial College London
Zhongze Cai
Imperial College Business School, Imperial College London
Hanzhao Wang
University of Sydney
Zhongyao Ma
OpenAI
Xiaocheng Li
Imperial College Business School, Imperial College London
Machine learning · Operations research