🤖 AI Summary
Existing reward models struggle to balance efficiency and interpretability: discriminative approaches are computationally efficient but lack reasoning capabilities, while generative methods offer interpretability at the cost of high computational expense. This work proposes CAMEL, a novel framework that leverages the log-probability difference of judgment tokens as a proxy for sample difficulty. By employing a confidence-gated mechanism, CAMEL first performs a lightweight single-token preference judgment and triggers reflective generative reasoning only for low-confidence samples. Integrated with counterfactual prefix-augmented reinforcement learning, CAMEL enables efficient and self-correcting reward modeling. Evaluated on three major benchmarks, it achieves an average accuracy of 82.9%, surpassing the previous state-of-the-art by 3.2% and outperforming 70B-parameter models despite using only 14B parameters, thereby significantly advancing the Pareto frontier of accuracy and efficiency.
📝 Abstract
Reward models play a fundamental role in aligning large language models with human preferences. Existing methods predominantly follow two paradigms: scalar discriminative preference models, which are efficient but lack interpretability, and generative judging models, which offer richer reasoning at the cost of higher computational overhead. We observe that the log-probability margin between verdict tokens strongly correlates with prediction correctness, providing a reliable proxy for instance difficulty without additional inference cost. Building on this insight, we propose CAMEL, a confidence-gated reflection framework that performs a lightweight single-token preference decision first and selectively invokes reflection only for low-confidence instances. To induce effective self-correction, we train the model via reinforcement learning with counterfactual prefix augmentation, which exposes the model to diverse initial verdicts and encourages genuine revision. Empirically, CAMEL achieves state-of-the-art performance on three widely used reward-model benchmarks with 82.9% average accuracy, surpassing the best prior model by 3.2% and outperforming 70B-parameter models using only 14B parameters, while establishing a strictly better accuracy-efficiency Pareto frontier.
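The gating mechanism described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`verdict_logprobs`, `reflect`) and the threshold value are hypothetical stand-ins, and the toy scorer replaces what would be a single forward pass of the reward model over the verdict tokens.

```python
import math


def verdict_logprobs(prompt: str) -> dict:
    """Toy stand-in for the model's log-probabilities over the two verdict
    tokens ("A" = first response preferred, "B" = second). A real system
    would read these off the first decoded token of the judge model."""
    if "easy" in prompt:
        return {"A": -0.2, "B": -1.8}  # clearly separated verdicts
    return {"A": -0.9, "B": -1.0}      # nearly tied verdicts


def reflect(prompt: str, initial_verdict: str) -> str:
    """Placeholder for the expensive reflective-reasoning path; the trained
    model may revise the initial verdict after generating a rationale."""
    return initial_verdict


def judge(prompt: str, margin_threshold: float = 0.5):
    """Confidence-gated judgment: accept the single-token verdict when the
    log-probability margin is large, otherwise trigger reflection."""
    lp = verdict_logprobs(prompt)
    margin = abs(lp["A"] - lp["B"])  # log-prob margin as a difficulty proxy
    fast_verdict = max(lp, key=lp.get)
    if margin >= margin_threshold:
        return fast_verdict, "fast"       # high confidence: cheap path
    return reflect(prompt, fast_verdict), "reflect"  # low confidence


print(judge("easy comparison"))  # large margin -> fast single-token path
print(judge("hard comparison"))  # small margin -> reflection triggered
```

The key property the paper exploits is that this margin comes for free: it is read off the same forward pass that produces the fast verdict, so easy instances incur no generative-reasoning cost at all.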