Large Language Models are overconfident and amplify human bias

📅 2025-05-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) inherit and amplify the human overconfidence bias. Method: We construct an automated reasoning benchmark with ground-truth answers, employ confidence-aware prompting, evaluate calibration via Expected Calibration Error (ECE), and conduct controlled human-subject experiments to systematically quantify confidence calibration across five state-of-the-art LLMs. Contribution/Results: We observe that LLM overconfidence worsens sharply as stated confidence decreases. Although LLM outputs improve human accuracy, they more than double human overconfidence, revealing a cognitive risk in human-AI collaboration. Results show pervasive miscalibration in LLMs (ECE of 20%-60%), whereas humans, despite accuracy comparable to the advanced LLMs, exhibit markedly better calibration. These findings provide empirical evidence for trustworthy LLM reasoning and safe human-AI collaboration.
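To make the calibration metric concrete: Expected Calibration Error groups predictions into confidence bins and takes the weighted average gap between mean confidence and accuracy in each bin. A minimal sketch (the equal-width binning and bin count here are common conventions, not details taken from the paper):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |mean confidence - accuracy| over equal-width
    confidence bins; 0.0 means perfectly calibrated."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # half-open bins (lo, hi]; put a confidence of exactly 0.0 in the first bin
        in_bin = [i for i, c in enumerate(confidences)
                  if (c > lo or (b == 0 and c == 0.0)) and c <= hi]
        if not in_bin:
            continue
        avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        accuracy = sum(correct[i] for i in in_bin) / len(in_bin)
        ece += (len(in_bin) / n) * abs(avg_conf - accuracy)
    return ece

# A model that always claims 100% confidence but is right 6 times out of 10:
print(expected_calibration_error([1.0] * 10, [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]))  # 0.4
```

On this toy input the reported ECE range of 20%-60% would correspond to return values of 0.2-0.6.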

📝 Abstract
Large language models (LLMs) are revolutionizing every aspect of society. They are increasingly used in problem-solving tasks as a substitute for human assessment and reasoning. LLMs are trained on what humans write and are thus prone to learning human biases. One of the most widespread human biases is overconfidence. We examine whether LLMs inherit this bias. We automatically construct reasoning problems with known ground truths and prompt LLMs to assess the confidence in their answers, closely following protocols used in human experiments. We find that all five LLMs we study are overconfident: they overestimate the probability that their answer is correct by between 20% and 60%. Humans have accuracy similar to the more advanced LLMs, but far lower overconfidence. Although humans and LLMs are similarly biased on questions they are certain they answered correctly, a key difference emerges: LLM bias increases sharply relative to humans as they become less sure that their answers are correct. We also show that LLM input has ambiguous effects on human decision making: it increases accuracy, but it more than doubles the extent of overconfidence in the answers.
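The overconfidence measure the abstract describes, the gap between stated confidence and realized accuracy, can be sketched as follows (this is an illustrative formulation, not the paper's exact protocol):

```python
def overconfidence(confidences, correct):
    """Mean stated confidence minus realized accuracy; a positive value
    means the answerer overestimates how often it is right."""
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

# Four answers, each claimed 90% likely to be correct, but only two were right:
print(round(overconfidence([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0]), 2))  # 0.4
```

In the abstract's terms, a gap of 0.2-0.6 on this scale corresponds to overestimating the probability of being correct by 20%-60%.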
Problem

Research questions and friction points this paper is trying to address.

LLMs inherit human overconfidence bias in reasoning tasks
LLMs show higher overconfidence than humans when uncertain
LLM input increases human accuracy but amplifies overconfidence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated construction of reasoning problems
LLM confidence assessment protocols
Comparative analysis of human and LLM biases