🤖 AI Summary
This work addresses the susceptibility of large language models (LLMs) to cognitive biases, such as conformity or authority cues, when deployed as automated evaluators. To mitigate this, the authors propose Epistemic Independence Training (EIT), a novel training framework that, for the first time, explicitly optimizes for the non-predictiveness of bias-inducing cues. By combining reinforcement learning with a custom reward design and a balanced conflict strategy, EIT trains models to actively disregard misleading bias signals during inference, thereby fostering transferable cognitive independence. Experiments on Qwen3-4B demonstrate that EIT substantially enhances both accuracy and robustness under adversarial bias conditions and generalizes effectively to unseen bias types.
📄 Abstract
Large language models (LLMs) increasingly serve as automated judges, yet they remain susceptible to cognitive biases -- often altering their reasoning when faced with spurious prompt-level cues such as consensus claims or authority appeals. Existing mitigations via prompting or supervised fine-tuning fail to generalize, as they modify surface behavior without changing the optimization objective that makes bias cues predictive. To address this gap, we propose Epistemic Independence Training (EIT), a reinforcement learning framework grounded in a key principle: to learn independence, bias cues must be made non-predictive of reward. EIT operationalizes this through a balanced conflict strategy where bias signals are equally likely to support correct and incorrect answers, combined with a reward design that penalizes bias-following without rewarding bias agreement. Experiments on Qwen3-4B demonstrate that EIT improves both accuracy and robustness under adversarial biases, while preserving performance when bias aligns with truth. Notably, models trained only on bandwagon bias generalize to unseen bias types such as authority and distraction, indicating that EIT induces transferable epistemic independence rather than bias-specific heuristics. Code and data are available at https://anonymous.4open.science/r/bias-mitigation-with-rl-BC47.
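The core idea, making bias cues non-predictive of reward, can be illustrated with a minimal sketch. The function and variable names below (`make_balanced_conflict_example`, `reward`, the bandwagon phrasing, and the penalty value) are hypothetical illustrations, not the paper's actual implementation; the sketch only shows the two stated ingredients: a cue that is equally likely to endorse the correct or incorrect answer, and a reward that penalizes following a misleading cue without granting extra credit when the cue agrees with the truth.

```python
import random

def make_balanced_conflict_example(question, correct, incorrect):
    """Attach a bandwagon-style cue that endorses the correct answer
    with probability 0.5 and the incorrect one otherwise, so the cue
    carries no information about which answer is rewarded."""
    cue_target = correct if random.random() < 0.5 else incorrect
    prompt = f"{question}\nMost people believe the answer is: {cue_target}"
    return prompt, correct, cue_target

def reward(model_answer, correct, cue_target, penalty=0.5):
    """Base reward for correctness; an extra penalty only when the model
    follows a cue that points to the wrong answer. Agreeing with a
    truth-aligned cue earns no bonus beyond ordinary correctness."""
    r = 1.0 if model_answer == correct else 0.0
    if model_answer == cue_target and cue_target != correct:
        r -= penalty  # bias-following penalty
    return r
```

Under this scheme, following the cue yields the same expected reward as ignoring it only if the model answers correctly regardless of the cue, which is what removes the incentive to treat the cue as a shortcut.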