AI Summary
This paper presents the first systematic empirical validation of the cognitive anchoring bias in large language models (LLMs): a systematic deviation in model judgments induced by initial information ("anchors"). To investigate this phenomenon, the authors construct SynAnchors, a synthetic benchmark dataset, and introduce a hierarchical attribution analysis framework coupled with an enhanced bias quantification metric. Through multi-model benchmarking and targeted reasoning-path interventions, they demonstrate that anchoring bias is pervasive across LLMs and predominantly manifests in shallow transformer layers; conventional robustness techniques prove ineffective, whereas chain-of-thought reasoning partially mitigates the effect. Key contributions include: (1) establishing the first empirical foundation for anchoring bias in LLMs; (2) proposing a cognitive-bias-aware evaluation paradigm for trustworthy AI; and (3) open-sourcing SynAnchors and its evaluation framework to support reproducible, bias-aware AI research.
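The summary mentions an "enhanced bias quantification metric" without spelling one out. Below is a minimal, hedged sketch of how anchoring could be quantified in practice: ask the same question preceded by a low anchor and a high anchor, then measure how far the model's answers shift relative to the anchor gap. The `anchoring_index` helper, the prompt wording, and the `toy_model` stand-in are illustrative assumptions, not the paper's SynAnchors protocol or its actual metric.

```python
# Hedged sketch of an anchoring-bias probe (NOT the paper's SynAnchors pipeline).
# `ask_model` is a placeholder for any LLM call that returns a numeric estimate.
from statistics import mean
from typing import Callable, List


def anchoring_index(
    ask_model: Callable[[str], float],
    question: str,
    low_anchor: float,
    high_anchor: float,
    n_trials: int = 5,
) -> float:
    """Return a score in [0, 1]: how far the gap between high-anchor and
    low-anchor answers spans the anchor gap itself.
    0 means the anchors had no pull; 1 means answers moved as much as the anchors."""
    low_prompt = f"A colleague first guessed {low_anchor}. {question}"
    high_prompt = f"A colleague first guessed {high_anchor}. {question}"
    low_answers: List[float] = [ask_model(low_prompt) for _ in range(n_trials)]
    high_answers: List[float] = [ask_model(high_prompt) for _ in range(n_trials)]
    answer_gap = mean(high_answers) - mean(low_answers)
    anchor_gap = high_anchor - low_anchor
    return max(0.0, min(1.0, answer_gap / anchor_gap))


if __name__ == "__main__":
    import re

    # Toy stand-in for an LLM: an estimator that drifts 30% of the way
    # toward whatever anchor appears in the prompt.
    def toy_model(prompt: str) -> float:
        anchor = float(re.search(r"guessed (\d+(?:\.\d+)?)", prompt).group(1))
        true_belief = 50.0
        return true_belief + 0.3 * (anchor - true_belief)

    score = anchoring_index(
        toy_model, "What is your best estimate of the quantity?", 10.0, 90.0
    )
    print(f"anchoring index: {score:.2f}")  # ~0.30 for this toy model
```

In a real evaluation, `ask_model` would wrap an actual LLM API call and parse a numeric answer from its output; the ratio-style score is just one simple way to normalize the anchor-induced shift.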
Abstract
The rise of Large Language Models (LLMs) like ChatGPT has advanced natural language processing, yet concerns about their cognitive biases are growing. In this paper, we investigate the anchoring effect, a cognitive bias in which judgments rely heavily on the first information received (the anchor). We explore whether LLMs are affected by anchoring, the mechanisms behind it, and potential mitigation strategies. To facilitate large-scale study of the anchoring effect, we introduce a new dataset, SynAnchors. Combined with refined evaluation metrics, we benchmark widely used LLMs. Our findings show that anchoring bias is pervasive in LLMs, acts mainly in shallow layers, and is not eliminated by conventional strategies, while reasoning can offer partial mitigation. This recontextualization through cognitive psychology urges that LLM evaluation focus not on standard benchmarks or over-optimized robustness tests, but on cognitive-bias-aware trustworthy evaluation.
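The abstract's claim that reasoning offers partial mitigation suggests a simple follow-up measurement: rerun the same probe with and without a chain-of-thought style instruction and compare the two scores. The sketch below reuses the hypothetical `anchoring_index` helper from above; the `COT_PREFIX` wording and the comparison design are assumptions, not the paper's reasoning-path intervention.

```python
# Hedged sketch: does a chain-of-thought style instruction reduce the anchoring index?
# Reuses the illustrative anchoring_index() defined in the earlier sketch.
from typing import Callable

COT_PREFIX = "Reason step by step before giving a final numeric answer. "


def mitigation_gain(
    ask_model: Callable[[str], float],
    question: str,
    low_anchor: float,
    high_anchor: float,
) -> float:
    """Positive values mean the chain-of-thought prompt lowered the anchoring index."""
    plain = anchoring_index(ask_model, question, low_anchor, high_anchor)
    with_cot = anchoring_index(ask_model, COT_PREFIX + question, low_anchor, high_anchor)
    return plain - with_cot
```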