🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit cognitive biases—particularly the anchoring effect—and assesses their behavioral reliability in price negotiation simulations. We construct LLM-based seller agents and adapt classic anchoring paradigms, employing a dual-dimensional evaluation framework combining objective metrics (e.g., offer deviation, negotiation success rate) and subjective human judgments. Results demonstrate that LLMs are significantly susceptible to anchoring; however, models with stronger chain-of-thought reasoning capabilities exhibit reduced bias, supporting the hypothesis that deeper reasoning mitigates cognitive distortions. No significant correlation is found between personality traits inferred from model outputs and anchoring sensitivity. To our knowledge, this is the first empirical study to validate cognitive bias mechanisms in LLMs within interactive decision-making settings, offering novel cognitive-science–informed evidence for designing trustworthy AI systems.
📝 Abstract
Cognitive biases, well studied in humans, can also be observed in LLMs, affecting their reliability in real-world applications. This paper investigates the anchoring effect in LLM-driven price negotiations. To this end, we instructed seller LLM agents to apply the anchoring effect and evaluated the resulting negotiations using both objective and subjective metrics. Experimental results show that, like humans, LLMs are influenced by the anchoring effect. We further investigated the relationship between the anchoring effect and factors such as reasoning and personality. Reasoning models proved less prone to the anchoring effect, suggesting that longer chains of thought mitigate it. However, we found no significant correlation between personality traits and susceptibility to the anchoring effect. These findings contribute to a deeper understanding of cognitive biases in LLMs and to the safe and responsible deployment of LLMs in society.
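To make the setup described above concrete, the sketch below shows one way an anchored seller prompt and a simple objective metric (deviation of the agreed price from an unanchored reference price) might be implemented. The prompt wording, the variable names, and the `offer_deviation` helper are illustrative assumptions, not the paper's actual protocol or metric definitions.

```python
# Hypothetical sketch of an anchored seller agent and an objective metric.
# Prompt text, names, and the metric are assumptions for illustration only;
# they are not taken from the paper.

def seller_system_prompt(list_price: float, anchor_price: float) -> str:
    """Build a seller instruction that opens the negotiation with a high anchor."""
    return (
        f"You are a seller negotiating the price of a used laptop. "
        f"Its fair market value is about ${list_price:.0f}. "
        f"Open the negotiation by quoting ${anchor_price:.0f} and "
        f"justify that figure before making any concession."
    )

def offer_deviation(final_price: float, list_price: float) -> float:
    """Objective metric: relative deviation of the agreed price
    from the fair (unanchored) reference price."""
    return (final_price - list_price) / list_price

# Example: a $500 laptop anchored at $800 that settles at $620.
print(seller_system_prompt(500, 800))
print(f"offer deviation: {offer_deviation(620, 500):+.2%}")  # +24.00%
```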