Making Bias Non-Predictive: Training Robust LLM Judges via Reinforcement Learning

📅 2026-02-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the susceptibility of large language models (LLMs) to cognitive biases, such as conformity or authority cues, when deployed as automated evaluators. To mitigate this, the authors propose Epistemic Independence Training (EIT), a novel training framework that, for the first time, explicitly optimizes for the non-predictiveness of bias-inducing cues. By integrating reinforcement learning with a custom reward mechanism and a balanced conflict strategy, EIT trains models to disregard misleading bias signals during inference, thereby fostering transferable cognitive independence. Experiments on Qwen3-4B demonstrate that EIT substantially enhances both accuracy and robustness under adversarial bias conditions and generalizes effectively to unseen bias types.

๐Ÿ“ Abstract
Large language models (LLMs) increasingly serve as automated judges, yet they remain susceptible to cognitive biases, often altering their reasoning when faced with spurious prompt-level cues such as consensus claims or authority appeals. Existing mitigations via prompting or supervised fine-tuning fail to generalize, as they modify surface behavior without changing the optimization objective that makes bias cues predictive. To address this gap, we propose Epistemic Independence Training (EIT), a reinforcement learning framework grounded in a key principle: to learn independence, bias cues must be made non-predictive of reward. EIT operationalizes this through a balanced conflict strategy where bias signals are equally likely to support correct and incorrect answers, combined with a reward design that penalizes bias-following without rewarding bias agreement. Experiments on Qwen3-4B demonstrate that EIT improves both accuracy and robustness under adversarial biases, while preserving performance when bias aligns with truth. Notably, models trained only on bandwagon bias generalize to unseen bias types such as authority and distraction, indicating that EIT induces transferable epistemic independence rather than bias-specific heuristics. Code and data are available at https://anonymous.4open.science/r/bias-mitigation-with-rl-BC47.
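The two ingredients named in the abstract, a balanced conflict strategy and a reward that penalizes bias-following without rewarding bias agreement, can be sketched roughly as below. This is a minimal illustration under assumed interfaces, not the paper's actual implementation; the function names, data fields, and exact reward values are hypothetical.

```python
import random

def make_balanced_batch(examples, rng=random.Random(0)):
    """Balanced conflict sketch: attach a bias cue that points to the
    correct answer half the time and to a wrong answer otherwise, so the
    cue carries no information about which answer earns reward."""
    batch = []
    for ex in examples:
        if rng.random() < 0.5:
            cue = ex["correct"]  # cue supports the truth
        else:
            cue = rng.choice([a for a in ex["options"] if a != ex["correct"]])
        batch.append({**ex, "bias_cue": cue})
    return batch

def eit_reward(judge_answer, correct_answer, bias_cue_answer):
    """Reward sketch (illustrative values): correctness is rewarded,
    bias-following on a wrong answer is penalized, and agreeing with a
    cue that happens to be right earns no bonus beyond correctness."""
    if judge_answer == correct_answer:
        return 1.0  # reward correctness only, even if it matches the cue
    if judge_answer == bias_cue_answer:
        return -1.0  # wrong answer that follows the cue: extra penalty
    return 0.0  # wrong but independent of the cue
```

Under this construction the cue is non-predictive of reward by design: following it is right exactly as often as it is wrong, so an RL-trained judge gains nothing in expectation from attending to it.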
Problem

Research questions and friction points this paper is trying to address.

cognitive biases
LLM judges
bias mitigation
epistemic independence
reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Epistemic Independence Training
reinforcement learning
bias mitigation
robust LLM judges
non-predictive bias cues