🤖 AI Summary
Large language models frequently produce overconfident and incorrect responses when they should abstain, yet existing evaluation metrics—such as Expected Calibration Error (ECE) and Area Under the Risk-Coverage curve (AURC)—fail to account for how confidence guides decisions under varying risk preferences. This work proposes the Behavioral Alignment Score (BAS), which introduces decision theory into confidence evaluation for the first time. BAS employs a utility-based model with asymmetric penalties to quantify the practical utility of a model’s “answer-or-abstain” decisions across a continuum of risk thresholds. It uncovers confidence deficiencies invisible to standard metrics and effectively discriminates between models that exhibit markedly different decision reliability despite similar conventional performance. Experiments reveal pervasive overconfidence in state-of-the-art models, while also demonstrating that simple post-processing can substantially improve confidence reliability.
📝 Abstract
Large language models (LLMs) often produce confident but incorrect answers in settings where abstention would be safer. Standard evaluation protocols, however, require a response and do not account for how confidence should guide decisions under different risk preferences. To address this gap, we introduce the Behavioral Alignment Score (BAS), a decision-theoretic metric for evaluating how well LLM confidence supports abstention-aware decision making. BAS is derived from an explicit answer-or-abstain utility model and aggregates realized utility across a continuum of risk thresholds, yielding a measure of decision-level reliability that depends on both the magnitude and ordering of confidence. We show theoretically that truthful confidence estimates uniquely maximize expected BAS utility, linking calibration to decision-optimal behavior. BAS is related to proper scoring rules such as log loss, but differs structurally: log loss penalizes underconfidence and overconfidence symmetrically, whereas BAS imposes an asymmetric penalty that strongly prioritizes avoiding overconfident errors. Using BAS alongside widely used metrics such as Expected Calibration Error (ECE) and Area Under the Risk-Coverage curve (AURC), we then construct a benchmark of self-reported confidence reliability across multiple LLMs and tasks. Our results reveal substantial variation in decision-useful confidence, and while larger and more accurate models tend to achieve higher BAS, even frontier models remain prone to severe overconfidence. Importantly, models with similar ECE or AURC can exhibit very different BAS due to highly overconfident errors, highlighting limitations of standard metrics. We further show that simple interventions, such as top-$k$ confidence elicitation and post-hoc calibration, can meaningfully improve confidence reliability. Overall, our work provides both a principled metric and a comprehensive benchmark for evaluating LLM confidence reliability.
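To make the abstract's answer-or-abstain utility model concrete, here is a minimal sketch of a BAS-style computation. This is an illustrative reconstruction, not the paper's actual definition: the function name `bas_sketch`, the threshold grid, and the specific penalty `t / (1 - t)` are assumptions. The penalty is chosen so that, at risk threshold `t`, answering has positive expected utility exactly when the model's true confidence exceeds `t`, which mirrors the paper's claim that truthful confidence maximizes expected utility.

```python
def bas_sketch(confidences, correct, thresholds=None):
    """Hypothetical sketch of a Behavioral Alignment Score.

    At each risk threshold t, the model answers when its self-reported
    confidence c satisfies c >= t and abstains otherwise. Realized
    utility per question:
      +1            if it answers and is correct,
      -t / (1 - t)  if it answers and is wrong (asymmetric penalty),
       0            if it abstains.
    The score averages realized utility over questions, then over the
    threshold grid, approximating the paper's continuum of risk levels.
    """
    if thresholds is None:
        # Grid of risk preferences; higher t = more risk-averse user.
        thresholds = [i / 20 for i in range(1, 20)]  # 0.05 .. 0.95
    per_threshold = []
    for t in thresholds:
        penalty = t / (1.0 - t)  # wrong answers cost more at higher risk levels
        total = 0.0
        for c, ok in zip(confidences, correct):
            if c >= t:  # answer-or-abstain decision rule
                total += 1.0 if ok else -penalty
            # abstaining contributes 0 utility
        per_threshold.append(total / len(confidences))
    return sum(per_threshold) / len(per_threshold)
```

Under this utility, a well-calibrated model that abstains on uncertain questions scores near its answered accuracy, while a single highly overconfident error drags the score down sharply at risk-averse thresholds, which is how the metric separates models that look alike under ECE or AURC.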