🤖 AI Summary
This paper identifies confirmation bias in large language models (LLMs) during chain-of-thought (CoT) reasoning: internal beliefs distort the question-to-reasoning (Q→R) generation process and bias reasoning-guided answer prediction (QR→A).
Method: Drawing on cognitive science, we decouple CoT into two distinct stages and propose a novel, probability-based quantification of belief strength derived from direct question-answering (QA) likelihoods. We conduct cross-task and cross-model comparisons, correlation analyses, and stage-wise ablation experiments.
Contribution/Results: We demonstrate that CoT effectiveness is jointly determined by belief strength and task vulnerability, explaining performance disparities across tasks. The identified relationship is interpretable and grounded in empirical evidence, providing both theoretical foundations and practical insights for designing debiased prompting strategies. Our framework offers a principled, cognition-informed approach to diagnosing and mitigating reasoning biases in LLMs.
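The exact belief-strength formula is not given in this summary; as an illustrative sketch, one could softmax-normalize the direct question-answering log-likelihoods of the candidate answers and take the probability of the top option as the belief strength. The function name and the normalization scheme below are assumptions for illustration, not the paper's precise formulation:

```python
import math

def belief_strength(option_logprobs: dict[str, float]) -> tuple[str, float]:
    """Approximate the model's internal belief from direct-QA likelihoods.

    Given log-likelihoods of each answer option under direct question
    answering (no chain of thought), return the believed answer and a
    belief-strength score: the softmax-normalized probability of that
    answer. (Illustrative assumption; the paper's exact quantification
    may differ.)
    """
    # Subtract the max log-prob before exponentiating, for numerical stability.
    max_lp = max(option_logprobs.values())
    unnorm = {a: math.exp(lp - max_lp) for a, lp in option_logprobs.items()}
    z = sum(unnorm.values())
    probs = {a: v / z for a, v in unnorm.items()}
    believed = max(probs, key=probs.get)
    return believed, probs[believed]
```

For example, `belief_strength({"A": -0.2, "B": -2.5, "C": -3.0})` identifies "A" as the believed answer with a strength around 0.86, whereas a near-uniform distribution over options would indicate a weak belief (strength near 1/k for k options).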
📝 Abstract
Chain-of-thought (CoT) prompting has been widely adopted to enhance the reasoning capabilities of large language models (LLMs). However, the effectiveness of CoT reasoning is inconsistent across tasks with different reasoning types. This work presents a novel perspective to understand CoT behavior through the lens of *confirmation bias* in cognitive psychology. Specifically, we examine how model internal beliefs, approximated by direct question-answering probabilities, affect both reasoning generation ($Q \to R$) and reasoning-guided answer prediction ($QR \to A$) in CoT. By decomposing CoT into a two-stage process, we conduct a thorough correlation analysis among model beliefs, rationale attributes, and stage-wise performance. Our results provide strong evidence of confirmation bias in LLMs, such that model beliefs not only skew the reasoning process but also influence how rationales are utilized for answer prediction. Furthermore, the interplay between task vulnerability to confirmation bias and the strength of model beliefs helps explain differences in CoT effectiveness across reasoning tasks and models. Overall, this study highlights the need for better prompting strategies that mitigate confirmation bias to enhance reasoning performance. Code is available at https://github.com/yuewan2/biasedcot.