Reinforcement Learning for Better Verbalized Confidence in Long-Form Generation

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from severe hallucination in long-form generation, while existing confidence estimation methods either rely on computationally expensive post-hoc sampling or are restricted to short-form question answering. Method: This paper proposes LoVeC (Long-form Verbalized Confidence), a learnable framework that extends verbalized confidence to long-form generation. It uses reinforcement learning (DPO, ORPO, or GRPO) to train LLMs to output a numerical confidence token immediately after each generated sentence, enabling lightweight, real-time, and interpretable monitoring of factual consistency. Contribution/Results: LoVeC introduces two novel evaluation settings, free-form tagging and iterative tagging, to assess verbalized confidence estimation in both on-policy and off-policy regimes. Evaluated on three long-form QA benchmarks, LoVeC achieves better calibration, generalizes robustly across domains, and incurs negligible computational overhead, adding only a few extra tokens per sentence.
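
A minimal sketch of what such confidence-tagged output and its parsing might look like. The `<conf=...>` tag format is an assumption for illustration; the paper only specifies that a numerical confidence token follows each generated sentence.

```python
import re

# Hypothetical tagged output: the model is trained to emit a numerical
# confidence immediately after each generated sentence. The exact tag
# surface form below is an assumption, not the paper's format.
generation = (
    "Marie Curie was born in Warsaw in 1867. <conf=0.95> "
    "She won her first Nobel Prize in 1903. <conf=0.90> "
    "She later served as president of France. <conf=0.10>"
)

# Split the stream into (sentence, confidence) pairs so factual
# consistency can be monitored on the fly.
pattern = re.compile(r"(.*?)\s*<conf=([01](?:\.\d+)?)>")
for sentence, conf in pattern.findall(generation):
    flag = "likely factual" if float(conf) >= 0.5 else "possible hallucination"
    print(f"[{conf}] {flag}: {sentence.strip()}")
```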

📝 Abstract
Hallucination remains a major challenge for the safe and trustworthy deployment of large language models (LLMs) in factual content generation. Prior work has explored confidence estimation as an effective approach to hallucination detection, but often relies on post-hoc self-consistency methods that require computationally expensive sampling. Verbalized confidence offers a more efficient alternative, but existing approaches are largely limited to short-form question answering (QA) tasks and do not generalize well to open-ended generation. In this paper, we propose LoVeC (Long-form Verbalized Confidence), an on-the-fly verbalized confidence estimation method for long-form generation. Specifically, we use reinforcement learning (RL) to train LLMs to append numerical confidence scores to each generated statement, serving as a direct and interpretable signal of the factuality of generation. Our experiments consider both on-policy and off-policy RL methods, including DPO, ORPO, and GRPO, to enhance the model calibration. We introduce two novel evaluation settings, free-form tagging and iterative tagging, to assess different verbalized confidence estimation methods. Experiments on three long-form QA datasets show that our RL-trained models achieve better calibration and generalize robustly across domains. Also, our method is highly efficient, as it only requires adding a few tokens to the output being decoded.
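
To make the RL setup more concrete, here is a minimal sketch of how DPO preference pairs could be assembled for calibration training: the response whose confidence tags better match sentence-level fact-check labels is preferred. The dict layout, field names, and the mean-absolute-gap scoring rule are illustrative assumptions, not the paper's exact recipe.

```python
def calibration_gap(confidences, factual_labels):
    """Mean absolute gap between a response's verbalized confidences and
    its sentence-level factuality labels (1 = supported, 0 = refuted).
    Lower means better calibrated; a simple stand-in scoring rule."""
    return sum(abs(c - y) for c, y in zip(confidences, factual_labels)) / len(confidences)

def make_dpo_pair(prompt, resp_a, resp_b):
    """Build one DPO preference pair: each response is a dict holding the
    tagged text, its confidence list, and fact-check labels. The better
    calibrated response becomes 'chosen', the other 'rejected'."""
    gap_a = calibration_gap(resp_a["confidences"], resp_a["labels"])
    gap_b = calibration_gap(resp_b["confidences"], resp_b["labels"])
    chosen, rejected = (resp_a, resp_b) if gap_a <= gap_b else (resp_b, resp_a)
    return {"prompt": prompt, "chosen": chosen["text"], "rejected": rejected["text"]}
```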
Problem

Research questions and friction points this paper is trying to address.

Detecting hallucinations in long-form LLM generation efficiently
Improving verbalized confidence estimation for open-ended factual content
Enhancing model calibration via RL without expensive sampling (see the calibration-metric sketch after this list)
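
To ground the calibration goal above, here is a minimal sketch of expected calibration error (ECE), a standard calibration metric, computed over sentence-level verbalized confidences against factuality verdicts. The binning scheme and toy inputs are illustrative, not drawn from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, labels, n_bins=10):
    """Standard ECE: bin sentences by stated confidence, then compare each
    bin's mean confidence to its empirical accuracy (fraction factual)."""
    conf = np.asarray(confidences, dtype=float)
    lab = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if lo == 0.0:
            in_bin |= conf == 0.0  # include exact zeros in the first bin
        if in_bin.any():
            ece += in_bin.mean() * abs(conf[in_bin].mean() - lab[in_bin].mean())
    return ece

# Toy example: three tagged sentences with fact-check verdicts.
print(expected_calibration_error([0.95, 0.9, 0.1], [1, 1, 0]))  # ~0.083
```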
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for verbalized confidence
On-the-fly confidence scores in generation
Efficient RL methods such as DPO, ORPO, and GRPO (a reward-function sketch follows this list)
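
A hedged sketch of a per-rollout reward that RL methods like GRPO could optimize: high verbalized confidence is rewarded on sentences a verifier supports and penalized on sentences it refutes. The verifier interface and the linear scoring rule are assumptions, not the paper's actual reward.

```python
def confidence_reward(tagged_sentences, verifier):
    """Score one rollout: reward high confidence on sentences the verifier
    supports and low confidence on ones it refutes. `verifier` is a
    hypothetical fact-checking callable mapping a sentence to
    True (supported) / False (refuted)."""
    if not tagged_sentences:
        return 0.0
    total = sum(conf if verifier(sent) else 1.0 - conf
                for sent, conf in tagged_sentences)
    return total / len(tagged_sentences)

# Toy usage with a stub verifier (assumed interface).
rollout = [("Paris is the capital of France.", 0.9),
           ("The Moon is made of cheese.", 0.2)]
print(confidence_reward(rollout, lambda s: "Paris" in s))  # -> 0.85
```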