GrACE: A Generative Approach to Better Confidence Elicitation in Large Language Models

📅 2025-09-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high computational overhead and poor calibration performance in confidence estimation for large language models (LLMs), this paper proposes GrACE—a generative confidence estimation method requiring no additional sampling or auxiliary models. Its core innovation lies in directly computing confidence scores via cosine similarity between the model’s final-layer hidden states and learnable special-token embeddings, followed by end-to-end fine-tuning for precise calibration. Additionally, GrACE introduces two confidence-guided test-time expansion strategies. Experiments across three mainstream LLMs and two benchmark datasets demonstrate that GrACE significantly outperforms six baseline methods in both discrimination ability and calibration quality. Crucially, it achieves these gains while substantially reducing inference-time computational cost and improving decision accuracy—offering an efficient, reliable confidence estimation framework suitable for high-stakes domains such as healthcare and finance.
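The core mechanism described above can be sketched in a few lines: confidence is read off as the cosine similarity between the model's final-layer hidden state and a learnable embedding for a special token appended to the vocabulary. The sketch below is a minimal illustration with NumPy stand-ins; the exact mapping from similarity to a [0, 1] confidence score is an assumption, not the paper's stated formula.

```python
import numpy as np

def grace_confidence(hidden_state, conf_embedding):
    """Hypothetical sketch of GrACE-style confidence: cosine similarity
    between the final-layer hidden state and a learnable special-token
    embedding, rescaled from [-1, 1] to [0, 1] (the rescaling is an
    assumption for illustration)."""
    cos = np.dot(hidden_state, conf_embedding) / (
        np.linalg.norm(hidden_state) * np.linalg.norm(conf_embedding)
    )
    return (cos + 1.0) / 2.0

rng = np.random.default_rng(0)
h = rng.standard_normal(16)  # stand-in for the last hidden state
e = rng.standard_normal(16)  # stand-in for the special-token embedding
score = grace_confidence(h, e)
```

In the actual method, `conf_embedding` would be trained end-to-end against calibration targets tied to answer accuracy, so the score tracks the probability that the generated answer is correct.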

📝 Abstract
Assessing the reliability of Large Language Models (LLMs) by confidence elicitation is a prominent approach to AI safety in high-stakes applications, such as healthcare and finance. Existing methods either require expensive computational overhead or suffer from poor calibration, making them impractical and unreliable for real-world deployment. In this work, we propose GrACE, a Generative Approach to Confidence Elicitation that enables scalable and reliable confidence elicitation for LLMs. GrACE adopts a novel mechanism in which the model expresses confidence by the similarity between the last hidden state and the embedding of a special token appended to the vocabulary, in real-time. We fine-tune the model for calibrating the confidence with calibration targets associated with accuracy. Experiments with three LLMs and two benchmark datasets show that the confidence produced by GrACE achieves the best discriminative capacity and calibration on open-ended generation tasks, outperforming six competing methods without resorting to additional sampling or an auxiliary model. Moreover, we propose two strategies for improving test-time scaling based on confidence induced by GrACE. Experimental results show that using GrACE not only improves the accuracy of the final decision but also significantly reduces the number of required samples in the test-time scaling scheme, indicating the potential of GrACE as a practical solution for deploying LLMs with scalable, reliable, and real-time confidence estimation.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM reliability via confidence elicitation for AI safety
Overcoming expensive computation and poor calibration in existing methods
Enabling scalable, reliable real-time confidence estimation without auxiliary models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative confidence elicitation via hidden state similarity
Fine-tuning for calibration with accuracy targets
Test-time scaling strategies reducing sample requirements
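One way the confidence signal can cut sample counts at test time is an early-stopping rule: keep drawing candidate answers until one clears a confidence threshold, then return the best answer seen. This is a hypothetical sketch of a confidence-guided strategy, not the paper's exact procedure; `generate`, the threshold, and the stopping rule are all assumptions for illustration.

```python
def confidence_guided_sampling(generate, threshold=0.9, max_samples=8):
    """Draw up to max_samples (answer, confidence) pairs from `generate`,
    stopping early once a sample's confidence reaches `threshold`.
    Returns the highest-confidence answer and the number of draws used."""
    best = None
    draws = 0
    for _ in range(max_samples):
        answer, conf = generate()
        draws += 1
        if best is None or conf > best[1]:
            best = (answer, conf)
        if conf >= threshold:
            break  # confident enough; skip the remaining samples
    return best, draws

# Deterministic stub standing in for an LLM that returns scored answers.
samples = iter([("a", 0.5), ("b", 0.95), ("c", 0.7)])
result, n = confidence_guided_sampling(lambda: next(samples))
# -> (("b", 0.95), 2): stops after 2 of 8 possible samples
```

A well-calibrated score is what makes this safe: stopping early only helps if high confidence reliably predicts a correct answer, which is why the calibration fine-tuning matters for the scaling strategies.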