Automatically Finding Reward Model Biases

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reward models in large language model post-training are susceptible to biases driven by irrelevant or even harmful response features such as length, formatting artifacts, hallucination, and sycophancy. This work introduces the research problem of automatically finding such biases in natural language and proposes an iterative, LLM-based approach for generating and refining bias hypotheses, organized as an evolutionary search rather than conventional best-of-N sampling. The method recovers known biases and surfaces novel ones, including preferences for redundant whitespace and hallucinated content in Skywork-V2-8B, a leading open-weight reward model. Synthetic bias-injection experiments validate the pipeline's recall and confirm that the evolutionary iterative search outperforms flat, non-iterative alternatives.

📝 Abstract
Reward models are central to large language model (LLM) post-training. However, past work has shown that they can reward spurious or undesirable attributes such as length, format, hallucinations, and sycophancy. In this work, we introduce and study the research problem of automatically finding reward model biases in natural language. We offer a simple approach of using an LLM to iteratively propose and refine candidate biases. Our method can recover known biases and surface novel ones: for example, we found that Skywork-V2-8B, a leading open-weight reward model, often mistakenly favors responses with redundant spacing and responses with hallucinated content. In addition, we show evidence that evolutionary iteration outperforms flat best-of-N search, and we validate the recall of our pipeline using synthetically injected biases. We hope our work contributes to further research on improving RMs through automated interpretability methods.
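The abstract's core loop (an LLM iteratively proposing and refining candidate bias hypotheses, with an evolutionary search over them rather than flat best-of-N) can be sketched as follows. This is a minimal toy illustration, not the paper's pipeline: the LLM proposer and the reward-model scorer are mocked with hypothetical functions and numbers, and the candidate hypothesis set is invented for the example.

```python
import random
from typing import Optional

random.seed(0)

# Hypothetical stand-ins: in the real pipeline an LLM proposes
# natural-language bias hypotheses and a reward model scores perturbed
# response pairs; here both are mocked so the search loop is runnable.
CANDIDATE_HYPOTHESES = ["add_whitespace", "add_hallucination", "add_flattery",
                        "add_bullet_list", "add_emoji", "no_change"]

def bias_strength(hypothesis: str) -> float:
    """Mock verifier: how strongly the (mock) reward model prefers
    responses perturbed according to the hypothesis. Numbers are invented."""
    table = {"add_whitespace": 0.8, "add_hallucination": 0.6,
             "add_flattery": 0.5, "add_bullet_list": 0.3,
             "add_emoji": 0.1, "no_change": 0.0}
    return table[hypothesis] + random.uniform(-0.05, 0.05)

def propose(parent: Optional[str]) -> str:
    """Mock LLM proposer: draw a fresh hypothesis, or 'refine' a parent
    (here, refinement is just resampling near the parent's neighbors)."""
    if parent is None or random.random() < 0.5:
        return random.choice(CANDIDATE_HYPOTHESES)
    idx = CANDIDATE_HYPOTHESES.index(parent)
    return CANDIDATE_HYPOTHESES[max(0, idx - 1)]

def evolutionary_search(generations: int = 5, pop: int = 4) -> str:
    """Evolutionary iteration: keep the best-scoring hypotheses each
    generation and refine them, instead of scoring N independent
    proposals once (flat best-of-N)."""
    population = [propose(None) for _ in range(pop)]
    for _ in range(generations):
        ranked = sorted(population, key=bias_strength, reverse=True)
        survivors = ranked[: pop // 2]
        population = survivors + [propose(p) for p in survivors]
    return max(population, key=bias_strength)
```

The contrast the abstract draws is between this loop, where high-scoring hypotheses seed the next round of proposals, and flat best-of-N, which would simply score N independent proposals once and keep the best.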
Problem

Research questions and friction points this paper is trying to address.

reward model bias
large language model
automated interpretability
hallucination
sycophancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

reward model bias
automated interpretability
iterative bias discovery
LLM-based probing
evolutionary search