🤖 AI Summary
This work addresses the unreliable confidence estimates of post-trained large language models (PoLLMs), which often suffer from severe overconfidence. To tackle this issue, the authors propose BaseCal, a plug-and-play, unsupervised calibration method that requires neither human annotations nor model modifications. BaseCal is the first to systematically leverage the well-calibrated nature of the corresponding base large language model (base LLM) to recalibrate PoLLM confidence scores. It comes in two variants: BaseCal-ReEval, which re-scores the PoLLM's responses with the base LLM, and BaseCal-Proj, which trains a lightweight projection network to map the PoLLM's final-layer hidden states into the base LLM's representation space before applying its output layer. Evaluated across five datasets and three LLM families, BaseCal reduces the expected calibration error (ECE) by 42.90% on average, substantially outperforming the best existing unsupervised baselines.
📝 Abstract
Reliable confidence is essential for trusting the outputs of large language models (LLMs), yet widely deployed post-trained LLMs (PoLLMs) typically compromise this trust with severe overconfidence. In contrast, we observe that their corresponding base LLMs often remain well-calibrated. This naturally motivates us to calibrate PoLLM confidence using the base LLM as a reference, and this work proposes two ways to achieve it. A straightforward solution, BaseCal-ReEval, evaluates a PoLLM's responses by feeding them into the base LLM and taking the average token probability as confidence. While effective, this approach introduces additional inference overhead. To address this, we propose BaseCal-Proj, which trains a lightweight projection to map the final-layer hidden states of a PoLLM back to those of its base LLM. These projected states are then processed by the base LLM's output layer to derive base-calibrated confidence for the PoLLM's responses. Notably, BaseCal is an unsupervised, plug-and-play solution that operates without human labels or LLM modifications. Experiments across five datasets and three LLM families demonstrate the effectiveness of BaseCal, which reduces Expected Calibration Error (ECE) by an average of 42.90% compared to the best unsupervised baselines.
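To make the two strategies concrete, here is a minimal PyTorch sketch under stated assumptions: the model ID, the function names, and the two-layer MLP used for the projection are illustrative choices, not the paper's released code. The abstract only specifies that the projection is lightweight and maps PoLLM final-layer hidden states back to those of the base LLM before applying the base LLM's output layer.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model ID for illustration; BaseCal assumes the PoLLM and
# its base LLM share a tokenizer and architecture.
BASE_ID = "meta-llama/Llama-3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
base_model.eval()

@torch.no_grad()
def reeval_confidence(prompt: str, response: str) -> float:
    """BaseCal-ReEval sketch: re-score the PoLLM's response with the base
    LLM and return the average token probability as confidence.
    Assumes the prompt/response token boundary aligns after tokenization."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    logits = base_model(full_ids).logits  # (1, seq_len, vocab)

    # Logits at position t predict token t + 1, so shift by one.
    pred_logits = logits[0, prompt_len - 1 : full_ids.shape[1] - 1].float()
    targets = full_ids[0, prompt_len:]
    probs = torch.softmax(pred_logits, dim=-1)
    token_probs = probs[torch.arange(targets.numel()), targets]
    return token_probs.mean().item()

class BaseCalProjection(nn.Module):
    """BaseCal-Proj sketch: a lightweight network mapping the PoLLM's
    final-layer hidden states into the base LLM's hidden-state space.
    The two-layer MLP here is an assumed architecture."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, pollm_hidden: torch.Tensor) -> torch.Tensor:
        # pollm_hidden: (batch, seq, hidden) final-layer states of the PoLLM.
        # The projected states are then fed through the base LLM's frozen
        # output layer (e.g., base_model.get_output_embeddings()) to derive
        # base-calibrated confidence without a second full forward pass.
        return self.proj(pollm_hidden)
```

The trade-off the abstract describes is visible here: BaseCal-ReEval requires an extra full forward pass through the base LLM per response, while BaseCal-Proj amortizes that cost into a small projection over hidden states the PoLLM already computes, followed only by the base LLM's output head.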