🤖 AI Summary
To address the challenge of quantifying and evaluating output uncertainty in large language models (LLMs), this paper introduces LM-Polygraph, the first unified uncertainty quantification (UQ) benchmark tailored to LLMs. It encompasses eleven diverse text generation tasks and systematically integrates mainstream UQ and confidence normalization techniques, including Monte Carlo Dropout, ensemble methods, token-level entropy, and calibration-based normalization, within a standardized, reproducible evaluation framework. Key contributions include: (1) the first controlled, cross-task consistent evaluation of UQ methods on LLM-generated text; (2) interpretability-driven confidence metrics that empirically reveal generalization patterns across UQ techniques; and (3) the identification of the most effective UQ combinations, demonstrating that normalization significantly improves confidence calibration. The code, models, and evaluation toolkit are publicly released to advance standardization in UQ research.
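To make one of the listed baselines concrete, below is a minimal sketch of token-level entropy as a sequence-level uncertainty score: average the entropy of the model's next-token distribution over all generation steps, so that flatter (less confident) distributions yield higher uncertainty. This is an illustrative implementation of the general technique, not LM-Polygraph's actual API; the function name and input format are assumptions.

```python
import math

def mean_token_entropy(token_logprobs: list[list[float]]) -> float:
    """Mean per-token entropy over a generated sequence.

    token_logprobs[t] holds the model's log-probabilities over the
    vocabulary at generation step t (assumed to be normalized).
    Higher values indicate greater uncertainty.
    """
    entropies = []
    for dist in token_logprobs:
        # H = -sum_v p(v) * log p(v), with p(v) = exp(log p(v))
        entropies.append(-sum(math.exp(lp) * lp for lp in dist))
    return sum(entropies) / len(entropies)

# A peaked (confident) distribution vs. a flat (uncertain) one
# over a toy 4-token vocabulary:
confident = [[math.log(0.97), math.log(0.01), math.log(0.01), math.log(0.01)]]
uncertain = [[math.log(0.25)] * 4]
assert mean_token_entropy(confident) < mean_token_entropy(uncertain)
```

In practice the per-step log-probabilities would come from the LLM's output scores at generation time; the flat distribution above attains the maximum entropy log(4) for a 4-token vocabulary.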
📄 Abstract
The rapid proliferation of large language models (LLMs) has stimulated researchers to seek effective and efficient approaches to deal with LLM hallucinations and low-quality outputs. Uncertainty quantification (UQ) is a key element of machine learning applications in dealing with such challenges. However, research to date on UQ for LLMs has been fragmented in terms of techniques and evaluation methodologies. In this work, we address this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines and offers an environment for controllable and consistent evaluation of novel UQ techniques over various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using our benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across eleven tasks, identifying the most effective approaches.

Code: https://github.com/IINemo/lm-polygraph
Benchmark: https://huggingface.co/LM-Polygraph