🤖 AI Summary
This paper addresses the pervasive overprecision problem in large language models (LLMs)—their tendency to generate implausibly narrow confidence intervals and exhibit poor calibration—by proposing the first systematic black-box evaluation framework. Methodologically, it introduces a three-stage pipeline: instruction-driven interval generation, response refinement, and statistical calibration assessment—explicitly avoiding hallucination- and bias-prone verbalized confidence measures. The work innovatively adapts the cognitive science concept of overprecision to LLM trustworthiness research, establishing a cross-task, multi-scale benchmarking paradigm. Experimental results reveal: (1) no correlation between generated interval width and instructed confidence level; (2) refinement fails to improve calibration; and (3) widespread miscalibration across models, indicating both a fundamental lack of internal understanding of confidence and an inability to reliably follow confidence-related instructions.
📝 Abstract
Recently, overconfidence in large language models (LLMs) has garnered considerable attention due to its fundamental importance in quantifying the trustworthiness of LLM generation. However, existing approaches prompt the *black box LLMs* to produce their confidence (*verbalized confidence*), which can be subject to many biases and hallucinations. Inspired by a different aspect of overconfidence in cognitive science called *overprecision*, we designed a framework for its study in black box LLMs. This framework contains three main phases: 1) generation, 2) refinement and 3) evaluation. In the generation phase we prompt the LLM to generate answers to numerical questions in the form of intervals with a certain level of confidence. This confidence level is imposed in the prompt and not required for the LLM to generate, as in previous approaches. We use various prompting techniques and repeat the same prompt multiple times to gauge the effects of randomness in the generation process. In the refinement phase, answers from the previous phase are refined to generate better answers. The LLM answers are evaluated and studied in the evaluation phase to understand the model's internal workings. This study allowed us to gain various insights into LLM overprecision: 1) LLMs are highly uncalibrated for numerical tasks; 2) there is no correlation between the length of the interval and the imposed confidence level, which can be symptomatic of either a) a lack of understanding of the concept of confidence or b) an inability to adjust self-confidence by following instructions; 3) LLM numerical precision differs depending on the task, scale of answer and prompting technique; 4) refinement of answers does not improve precision in most cases. We believe this study offers new perspectives on LLM overconfidence and serves as a strong baseline for overprecision in LLMs.
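
The evaluation phase described above amounts to checking whether the empirical coverage of the generated intervals matches the confidence level imposed in the prompt. A minimal sketch of that check (the function names and toy data below are illustrative, not from the paper):

```python
def coverage(intervals, truths):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(intervals)

# Toy data: intervals an LLM produced when instructed to be 90% confident,
# paired with the ground-truth numerical answers.
intervals = [(10, 20), (5, 6), (100, 400), (0, 1)]
truths = [15, 7, 250, 0.5]

empirical = coverage(intervals, truths)
# A gap between empirical coverage and the instructed level signals
# miscalibration; coverage below the level indicates overprecision
# (intervals that are too narrow).
gap = empirical - 0.90
print(empirical, gap)
```

If the model were well calibrated, empirical coverage would track the instructed confidence level as that level varies; the paper's finding is that interval width does not respond to the instructed level at all.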