Online Rubrics Elicitation from Pairwise Comparisons

📅 2025-10-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Static reward modeling based on fixed specifications is prone to reward hacking and struggles to adapt to requirements that emerge during training. To address this, we propose a method that constructs evaluation criteria dynamically within a reinforcement learning framework: it automatically distills multidimensional assessment criteria—such as logical coherence, transparency, practical utility, and structured expression—via pairwise comparisons of policy responses and online clustering, enabling the evaluation dimensions to evolve continuously. This approach overcomes the limitations of static scoring, mitigates reward hacking, and adapts to newly emerging desirable properties. Experiments on AlpacaEval, GPQA, ArenaHard, and an expert-annotated validation set show that the method achieves up to an 8% improvement in overall performance over static-specification baselines.

📝 Abstract
Rubrics provide a flexible way to train LLMs on open-ended long-form answers where verifiable rewards are not applicable and human preferences provide coarse signals. Prior work shows that reinforcement learning with rubric-based rewards leads to consistent gains in LLM post-training. Most existing approaches rely on rubrics that remain static over the course of training. Such static rubrics, however, are vulnerable to reward-hacking type behaviors and fail to capture emergent desiderata that arise during training. We introduce Online Rubrics Elicitation (OnlineRubrics), a method that dynamically curates evaluation criteria in an online manner through pairwise comparisons of responses from current and reference policies. This online process enables continuous identification and mitigation of errors as training proceeds. Empirically, this approach yields consistent improvements of up to 8% over training exclusively with static rubrics across AlpacaEval, GPQA, ArenaHard as well as the validation sets of expert questions and rubrics. We qualitatively analyze the elicited criteria and identify prominent themes such as transparency, practicality, organization, and reasoning.
Problem

Research questions and friction points this paper is trying to address.

How to elicit evaluation criteria dynamically through pairwise comparisons of responses
How to mitigate reward hacking by updating rubrics during LLM training
How to improve performance on open-ended tasks where reward signals are coarse
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic rubric elicitation through pairwise comparisons
Online curation of evaluation criteria during training
Continuous error identification and mitigation process
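The loop the paper describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the toy judge, and the string-based deduplication (a stand-in for the paper's online clustering and LLM judge) are all assumptions for the example.

```python
def elicit_criteria(prompt, current_resp, ref_resp, judge):
    """Compare a current-policy response against a reference-policy response
    and return any new criteria the judge says distinguish them."""
    return judge(prompt, current_resp, ref_resp)

def merge_rubric(rubric, new_criteria):
    """Add newly elicited criteria, deduplicating by normalized text
    (a crude stand-in for online clustering of criteria)."""
    seen = {c.lower().strip() for c in rubric}
    for c in new_criteria:
        key = c.lower().strip()
        if key not in seen:
            rubric.append(c)
            seen.add(key)
    return rubric

def toy_judge(prompt, a, b):
    """Hypothetical stub: a real system would query an LLM judge here."""
    criteria = []
    if "step" in a and "step" not in b:
        criteria.append("Shows reasoning steps transparently")
    if len(a) > len(b):
        criteria.append("Provides sufficient detail")
    return criteria

rubric = ["Answers the question directly"]
new = elicit_criteria("Explain X", "step 1 ... step 2 ...", "short answer", toy_judge)
rubric = merge_rubric(rubric, new)
print(rubric)
```

In training, this elicitation step would run periodically on fresh response pairs, so the rubric used for reward scoring grows to cover behaviors (e.g. transparency, organization) that only appear as the policy drifts.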