🤖 AI Summary
This study systematically evaluates the calibration and correctness-predictive capability of Maximum Softmax Probability (MSP) across 15 dialogue-finetuned large language models (LLMs) on multiple-choice question answering. Method: We quantify systematic miscalibration in MSP while assessing its discriminative power between correct and incorrect answers; we further propose a lightweight method for adaptive MSP confidence-threshold selection—requiring only ~100 labeled examples—and integrate it into a confidence-driven abstention mechanism. Contribution/Results: Despite pervasive miscalibration, MSP exhibits strong discriminative power (p < 0.001) for models that perform well on the underlying Q&A task, with discrimination performance positively correlated with answer accuracy but uncorrelated with calibration error. Our adaptive thresholding enables high coverage while significantly improving overall accuracy via selective abstention. This work demonstrates that MSP retains high discriminative efficacy even under miscalibration, supporting lightweight, low-dependency trustworthy reasoning in LLMs.
📝 Abstract
We study 15 large language models (LLMs) fine-tuned for chat and find that their maximum softmax probabilities (MSPs) are consistently miscalibrated on multiple-choice Q&A. However, those MSPs might still encode useful uncertainty information. Specifically, we hypothesized that wrong answers would be associated with smaller MSPs compared to correct answers. Via rigorous statistical testing, we show that this hypothesis holds for models that perform well on the underlying Q&A task. We also find a strong direct correlation between Q&A accuracy and MSP correctness prediction, while finding no correlation between Q&A accuracy and calibration error. This suggests that within the current fine-tuning paradigm, we can expect correctness prediction, but not calibration, to improve as LLM capabilities progress. To demonstrate the utility of correctness prediction, we show that when models have the option to abstain, performance can be improved by selectively abstaining based on the MSP of the initial model response, using only a small amount of labeled data to choose the MSP threshold.
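The abstention mechanism described above can be sketched in a few lines: compute the MSP of the model's answer, pick a threshold on a small labeled set, and abstain when the MSP falls below it. This is a minimal illustration, not the paper's implementation; in particular, the net-correctness scoring rule used to select the threshold is an assumption, since the abstract does not specify the exact objective.

```python
import numpy as np

def msp(logits):
    """Maximum softmax probability over the answer-option logits."""
    z = np.asarray(logits, dtype=float)
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

def choose_threshold(msps, correct):
    """Pick an MSP threshold on a small labeled set (~100 examples).

    Scoring rule (an assumption for illustration): +1 per correct answer
    given, -1 per wrong answer given, 0 per abstention.
    """
    msps = np.asarray(msps, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    best_t, best_score = 0.0, -np.inf
    for t in np.unique(msps):         # candidate thresholds: observed MSPs
        answered = msps >= t
        score = correct[answered].sum() - (~correct[answered]).sum()
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def answer_or_abstain(logits, threshold):
    """Abstain when the model's confidence (MSP) is below the threshold."""
    return "abstain" if msp(logits) < threshold else "answer"
```

On a toy calibration set where high-MSP answers tend to be correct, `choose_threshold` lands between the correct and incorrect clusters, so the model answers confident questions and abstains on the rest.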