🤖 AI Summary
This study addresses the susceptibility of large language models (LLMs) to confirmation bias in rule-discovery tasks: they tend to seek evidence that confirms rather than falsifies their hypotheses, which impairs their reasoning. Drawing on established paradigms from human cognitive psychology, the work systematically demonstrates the prevalence of this bias across multiple model families and scales. It then adapts cognitive intervention strategies originally designed for humans, specifically interactive task framing, counterexample-guided prompting, and behavioral distillation, to recalibrate LLMs' exploratory behavior. Experiments show that these interventions raise rule-discovery success rates from 42% to 56% on average, and that the distilled models generalize well to unseen tasks such as the Blicket detection paradigm.
📝 Abstract
Confirmation bias, the tendency to seek evidence that supports rather than challenges one's beliefs, hinders one's reasoning ability. We examine whether large language models (LLMs) exhibit confirmation bias by adapting the rule-discovery study from human psychology: given a sequence of three numbers (a "triple"), an agent engages in an interactive feedback loop in which it (1) proposes a new triple, (2) receives feedback on whether it satisfies the hidden rule, and (3) guesses the rule. Across eleven LLMs of multiple families and scales, we find that LLMs exhibit confirmation bias, often proposing triples to confirm their hypothesis rather than trying to falsify it. This leads to slower and less frequent discovery of the hidden rule. We further explore intervention strategies (e.g., encouraging the agent to consider counterexamples) developed for humans. We find that prompting LLMs with such instructions consistently decreases confirmation bias, improving rule-discovery rates from 42% to 56% on average. Lastly, we mitigate confirmation bias by distilling intervention-induced behavior into LLMs, showing promising generalization to a new task, the Blicket test. Our work shows that confirmation bias is a limitation of LLMs in hypothesis exploration, and that it can be mitigated by applying interventions designed for humans.
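The interactive loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the hidden rule ("strictly increasing"), the agents' hypotheses, and the held-out evaluation triples are all hypothetical choices made here to show why confirmation-only probing fails.

```python
# Sketch of the rule-discovery feedback loop from the abstract.
# Assumed hidden rule for illustration: the triple is strictly increasing.

def hidden_rule(triple):
    """Oracle for the hidden rule (assumption: strictly increasing)."""
    a, b, c = triple
    return a < b < c

def run_episode(probes, final_guess):
    """(1)-(2): submit each proposed triple and collect yes/no feedback;
    (3): check the agent's final rule guess against the oracle on a
    small held-out set of triples."""
    feedback = [(t, hidden_rule(t)) for t in probes]
    held_out = [(1, 2, 3), (2, 4, 6), (3, 2, 1), (5, 5, 5), (1, 3, 2)]
    correct = all(final_guess(t) == hidden_rule(t) for t in held_out)
    return feedback, correct

# A confirmation-biased agent probes only triples that fit its hypothesis
# ("each number doubles"): every probe returns True, so the overly
# specific rule is never refuted and fails on the held-out set.
biased_probes = [(2, 4, 8), (3, 6, 12), (5, 10, 20)]
_, biased_ok = run_episode(
    biased_probes, lambda t: t[1] == 2 * t[0] and t[2] == 2 * t[1]
)

# A falsifying agent also probes triples that would break its hypothesis:
# (1, 2, 3) satisfies the hidden rule despite not doubling, refuting the
# narrow hypothesis and pointing to the broader "increasing" rule.
falsify_probes = [(2, 4, 8), (1, 2, 3), (3, 2, 1)]
_, falsify_ok = run_episode(falsify_probes, lambda t: t[0] < t[1] < t[2])

print(biased_ok, falsify_ok)  # → False True
```

The asymmetry is the point of the task: confirming probes can never distinguish a narrow hypothesis from a broader hidden rule, while a single disconfirming probe can.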