Do Not Design, Learn: A Trainable Scoring Function for Uncertainty Estimation in Generative LLMs

📅 2024-06-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM uncertainty estimation (UE) methods rely on hand-crafted scoring functions (e.g., length normalization or semantic weighting) that fail to adequately capture token-level probability miscalibration and intricate semantic dependencies, resulting in poor calibration. To address this, we propose Learnable Response Scoring (LARS), the first end-to-end trainable UE scoring framework that jointly models token-level probability distributions and semantic contributions to produce calibrated uncertainty scores. LARS employs supervised learning coupled with probabilistic aggregation to learn task-aware uncertainty scores. Evaluated on question-answering and arithmetic-reasoning benchmarks, LARS achieves up to a 16% improvement in AUROC over state-of-the-art baselines. By eliminating reliance on manual feature engineering, LARS establishes a more reliable, data-driven, and differentiable paradigm for uncertainty quantification in generative LLMs.
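For context on what LARS replaces, a hand-crafted baseline such as length-normalized scoring can be sketched as follows. The token probabilities here are hypothetical, and this is only an illustration of the baseline family, not the paper's implementation:

```python
import math

def length_normalized_score(token_probs):
    """Hand-crafted baseline: the geometric mean of token probabilities,
    i.e. exp of the length-normalized sum of log-probabilities."""
    log_probs = [math.log(p) for p in token_probs]
    return math.exp(sum(log_probs) / len(log_probs))

# Length normalization removes the bias against long answers: a short
# and a long response with the same per-token confidence score alike.
# But it treats every token equally, so it cannot reweight tokens by
# semantic importance -- the gap LARS is designed to close.
short = [0.5, 0.5]
long_ = [0.5] * 10
print(length_normalized_score(short))  # 0.5
print(length_normalized_score(long_))  # 0.5
```

The design limitation is visible in the example: filler tokens and answer-critical tokens contribute identically to the score.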

📝 Abstract
Uncertainty estimation (UE) of generative large language models (LLMs) is crucial for evaluating the reliability of generated sequences. A significant subset of UE methods utilize token probabilities to assess uncertainty, aggregating multiple token probabilities into a single UE score using a scoring function. Existing scoring functions for probability-based UE, such as length-normalized scoring and semantic contribution-based weighting, are designed to solve certain aspects of the problem but exhibit limitations, including the inability to handle biased probabilities and complex semantic dependencies between tokens. To address these issues, in this work we propose the Learnable Response Scoring (LARS) function, a novel scoring function that leverages supervised data to capture complex dependencies between tokens and probabilities, thereby producing more reliable and calibrated response scores when computing the uncertainty of LLM generations. Our comprehensive experiments across question-answering and arithmetic reasoning tasks with various datasets demonstrate that LARS significantly outperforms existing scoring functions, achieving improvements of up to 16% in AUROC.
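The core idea of a learned scoring function can be sketched with a toy supervised scorer: fit a model on (token probabilities, correctness label) pairs so that the output score tracks response reliability. This minimal sketch uses logistic regression over simple hand-picked features; the actual LARS model is a neural network over richer token-level inputs, and all data below is synthetic:

```python
import math
import random

def features(token_probs):
    # Illustrative features only: mean log-prob, min prob, and a bias term.
    logs = [math.log(p) for p in token_probs]
    return [sum(logs) / len(logs), min(token_probs), 1.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_scorer(responses, labels, lr=0.5, epochs=200):
    """Fit logistic-regression weights so the learned score predicts
    response correctness (label 1 = correct, 0 = incorrect)."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for probs, y in zip(responses, labels):
            x = features(probs)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            for i in range(len(w)):
                w[i] += lr * (y - p) * x[i]  # gradient ascent on log-likelihood
    return w

def learned_score(w, token_probs):
    x = features(token_probs)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# Synthetic supervision: confident generations tend to be correct.
random.seed(0)
correct = [[random.uniform(0.7, 0.99) for _ in range(5)] for _ in range(20)]
wrong = [[random.uniform(0.05, 0.6) for _ in range(5)] for _ in range(20)]
w = train_scorer(correct + wrong, [1] * 20 + [0] * 20)
print(learned_score(w, [0.9] * 5) > learned_score(w, [0.2] * 5))
```

Unlike a fixed formula, the mapping from token probabilities to a response score is learned from labeled data, which is what lets a LARS-style scorer absorb biased probabilities and token dependencies that hand-crafted functions miss.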
Problem

Research questions and friction points this paper is trying to address.

Estimating the uncertainty of sequences generated by LLMs
Overcoming the limitations of hand-crafted scoring functions over token probabilities
Capturing complex semantic dependencies between tokens in a response
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end trainable scoring function (LARS)
Supervised learning from labeled response data
More reliable, calibrated uncertainty scores