🤖 AI Summary
To address the annotation noise introduced when LLMs assign preference labels post hoc, this paper proposes RMBoost, a synthetic preference data generation paradigm that reverses the conventional order: it first generates one response, then *pre-selects* the preference label (e.g., "better" or "worse"), and only then generates the second response conditioned on both the first response and that label. Because each pair is constructed intentionally rather than labeled after the fact, labeling noise is mitigated at the source. RMBoost further diversifies the generated responses by steering the LLM with multiple quality aspects (e.g., helpfulness, relevance, completeness) in the prompt. In experiments on three diverse datasets, RMBoost outperforms other synthetic preference data generation techniques and significantly improves the performance of four distinct reward models.
📝 Abstract
Reward models (RMs) are crucial for aligning large language models (LLMs) with human preferences. They are trained using preference datasets where each example consists of one input prompt, two responses, and a preference label. As curating a high-quality, human-labeled preference dataset is both time-consuming and expensive, practitioners often rely on existing powerful LLMs for preference label generation. This can potentially introduce noise and impede RM training. In this work, we present RMBoost, a novel synthetic preference data generation paradigm to boost reward model quality. Unlike traditional methods, which generate two responses before obtaining the preference label, RMBoost first generates one response and selects a preference label, followed by generating the second, more (or less) preferred response conditioned on the pre-selected preference label and the first response. This approach offers two main advantages. First, RMBoost reduces labeling noise since preference pairs are constructed intentionally. Second, RMBoost facilitates the creation of more diverse responses by incorporating various quality aspects (e.g., helpfulness, relevance, completeness) into the prompts. We conduct extensive experiments across three diverse datasets and demonstrate that RMBoost outperforms other synthetic preference data generation techniques and significantly boosts the performance of four distinct reward models.
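The generation order described in the abstract (first response → pre-selected label → conditioned second response) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for a real LLM API, stubbed here so the sketch runs end-to-end, and the exact conditioning prompt wording is an assumption.

```python
import random

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real LLM call; returns a canned
    # string so this sketch is self-contained and runnable.
    return f"[model response to: {prompt[:50]}...]"

def generate_preference_pair(
    user_prompt: str,
    aspects=("helpfulness", "relevance", "completeness"),
):
    """Sketch of RMBoost's order of operations: response 1 -> label -> response 2."""
    # Step 1: generate the first response as usual.
    response_1 = call_llm(user_prompt)

    # Step 2: pre-select the preference label BEFORE generating the
    # second response (in contrast to post-hoc labeling).
    label = random.choice(["better", "worse"])

    # Step 3: generate the second response conditioned on the first
    # response, the pre-selected label, and the quality aspects to vary.
    conditioning = (
        f"User prompt: {user_prompt}\n"
        f"Existing response: {response_1}\n"
        f"Write a response that is {label} with respect to "
        f"{', '.join(aspects)}."
    )
    response_2 = call_llm(conditioning)

    # The preference label is known by construction, not inferred later.
    chosen, rejected = (
        (response_2, response_1) if label == "better"
        else (response_1, response_2)
    )
    return {"prompt": user_prompt, "chosen": chosen, "rejected": rejected}
```

Because the label is fixed before the second response exists, every synthetic pair comes with a ground-truth-by-construction preference, which is the source of the noise reduction claimed above.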