COPU: Conformal Prediction for Uncertainty Quantification in Natural Language Generation

📅 2025-02-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing uncertainty quantification for large language model (LLM) text generation suffers from inaccurate calibration, while conformal prediction (CP) in natural language generation (NLG) often omits ground-truth tokens and fails to achieve target coverage due to sampling bias. Method: This work introduces the first explicit injection of ground-truth candidates into NLG and proposes a logit-based nonconformity score to reconstruct the candidate set, strictly adhering to the CP framework. It guarantees theoretically valid coverage across a broad range of target error rates (α ∈ [0.05, 0.3]). Contribution/Results: Evaluated on six state-of-the-art LLMs and four canonical NLG tasks, our approach reduces calibration error by up to 47%, achieves empirically stable coverage matching theoretical targets, and significantly outperforms existing baselines. Its core contribution is establishing the first CP-based uncertainty quantification paradigm for NLG with rigorous, distribution-free coverage guarantees.

Technology Category

Application Category

📝 Abstract
Uncertainty Quantification (UQ) for Natural Language Generation (NLG) is crucial for assessing the performance of Large Language Models (LLMs), as it reveals confidence in predictions, identifies failure modes, and gauges output reliability. Conformal Prediction (CP), a model-agnostic method that generates prediction sets with a specified error rate, has been adopted for UQ in classification tasks, where the size of the prediction set indicates the model's uncertainty. However, when adapting CP to NLG, the sampling-based method for generating candidate outputs cannot guarantee the inclusion of the ground truth, limiting its applicability across a wide range of error rates. To address this, we propose ourmethod, a method that explicitly adds the ground truth to the candidate outputs and uses logit scores to measure nonconformity. Our experiments with six LLMs on four NLG tasks show that ourmethod outperforms baseline methods in calibrating error rates and empirical cover rates, offering accurate UQ across a wide range of user-specified error rates.
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty in Natural Language Generation.
Ensure inclusion of ground truth in predictions.
Calibrate error rates across diverse NLG tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conformal Prediction for NLG
Ground truth in candidate outputs
Logit scores measure nonconformity
🔎 Similar Papers
No similar papers found.