🤖 AI Summary
In large language model (LLM) training, paid annotators often lack sufficient quality incentives, and annotation quality remains difficult to measure and control. Method: Grounded in principal-agent theory, this paper proposes an incentive mechanism based on gold-standard questions. It introduces two criteria for gold questions—high certainty and a format similar to normal questions—and establishes, for the first time, that under strategic agent behavior the hypothesis-testing convergence rate is $\Theta(1/\sqrt{n \log n})$. The approach integrates principal-agent modeling, maximum-likelihood estimation, gold-question selection, and incentive-compatible experimental design. Contribution/Results: Empirical evaluation on human preference data shows that the mechanism captures annotator behavior more accurately than conventional manipulation checks, substantially improving the measurability and controllability of annotation quality, and provides a verifiable, generalizable incentive paradigm for constructing high-quality training data.
📝 Abstract
Human-annotated data plays a vital role in training large language models (LLMs), in stages such as supervised fine-tuning and human preference alignment. However, it is not guaranteed that paid human annotators produce high-quality data. In this paper, we study how to incentivize human annotators to do so. We start from a principal-agent model of the dynamics between the company (the principal) and the annotator (the agent), where the principal can only monitor the annotation quality by examining $n$ samples. We investigate the maximum likelihood estimator (MLE) and the corresponding hypothesis testing to incentivize annotators: the agent is given a bonus if the MLE passes the test. By analyzing the variance of the outcome, we show that the strategic behavior of the agent makes the hypothesis testing very different from traditional ones: unlike the exponential rate proved by large deviation theory, the principal-agent model's hypothesis-testing rate is of $\Theta(1/\sqrt{n \log n})$. Our theory implies two criteria for the \emph{golden questions} used to monitor the performance of the annotators: they should be of (1) high certainty and (2) similar format to normal ones. In that light, we select a set of golden questions in human preference data. Through incentive-compatible experiments, we find that the annotators' behavior is better revealed by those golden questions than by traditional survey techniques such as instructed manipulation checks.
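To make the monitoring setup concrete, here is a minimal sketch of a golden-question bonus check. This is an illustrative simplification, not the paper's actual mechanism: it replaces the MLE-based test with a plain one-sided binomial test against a "careless annotator" null, and the accuracy and significance parameters (`p_careless`, `alpha`) are assumptions chosen for illustration.

```python
import math

def binom_sf(k, n, p):
    """P[X >= k] for X ~ Binomial(n, p), computed exactly via math.comb."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def passes_gold_check(answers, gold_labels, p_careless=0.5, alpha=0.05):
    """Award the bonus iff the count of correct answers on the n golden
    questions is implausibly high under the careless-annotator null
    (accuracy p_careless on high-certainty questions)."""
    n = len(gold_labels)
    k = sum(a == g for a, g in zip(answers, gold_labels))
    # One-sided test: reject the null of careless annotation
    # when P[X >= k | careless] falls below alpha.
    return binom_sf(k, n, p_careless) < alpha

# Example: 20 golden questions with binary preference labels.
gold = [1, 0, 1, 1, 0] * 4
careful = list(gold)                 # agrees on every golden question
careless = [1 - g for g in gold]     # disagrees on every golden question
print(passes_gold_check(careful, gold))   # bonus awarded
print(passes_gold_check(careless, gold))  # bonus withheld
```

The paper's analysis concerns how fast such a test can distinguish strategic effort levels as $n$ grows; the $\Theta(1/\sqrt{n \log n})$ rate says this is much slower than the exponential rates familiar from non-strategic hypothesis testing.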