MCQA-Eval: Efficient Confidence Evaluation in NLG with Gold-Standard Correctness Labels

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing NLG confidence estimation is evaluated with noisy, biased heuristic correctness functions, which distort evaluation metrics and can misrank methods. Method: We propose the first correctness-function-free unified evaluation framework, leveraging gold-standard labels from multiple-choice question-answering (MCQA) datasets to enable a fair, apples-to-apples comparison of both white-box and black-box confidence methods. The framework accommodates logit-based analysis as well as response-consistency measurement, and we benchmark across multiple LLMs and mainstream QA datasets. Contribution/Results: The framework improves assessment reliability, removes systematic biases, and corrects relative rankings of state-of-the-art confidence methods that prior evaluations got wrong. It establishes a reproducible, unbiased, and low-cost supervised evaluation paradigm for NLG trustworthiness research, requiring no human annotation, model fine-tuning, or task-specific correctness heuristics.
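
To make the core idea concrete, here is a minimal sketch (not the authors' code) of what evaluation looks like once gold correctness labels are available: the heuristic correctness function disappears, and a confidence measure is scored by how well it separates correct from incorrect answers. The record fields and the choice of AUROC are illustrative assumptions, not necessarily the exact metric used in the paper.

```python
# Minimal sketch of correctness-function-free evaluation: with gold MCQA labels,
# correctness is exact match against the gold option, and a confidence measure
# is judged by how well it ranks correct answers above incorrect ones.
from sklearn.metrics import roc_auc_score

# Hypothetical per-question records: the model's chosen option, the gold option,
# and the confidence score produced by the method under evaluation.
records = [
    {"chosen": "B", "gold": "B", "confidence": 0.91},
    {"chosen": "A", "gold": "C", "confidence": 0.45},
    {"chosen": "D", "gold": "D", "confidence": 0.78},
    {"chosen": "C", "gold": "A", "confidence": 0.60},
]

# Gold labels replace any heuristic correctness function.
correct = [int(r["chosen"] == r["gold"]) for r in records]
scores = [r["confidence"] for r in records]

# Higher AUROC means the confidence measure more reliably separates
# correct from incorrect responses.
print("AUROC:", roc_auc_score(correct, scores))
```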

📝 Abstract
Large Language Models (LLMs) require robust confidence estimation, particularly in critical domains like healthcare and law where unreliable outputs can lead to significant consequences. Despite much recent work on confidence estimation, current evaluation frameworks rely on correctness functions -- various heuristics that are often noisy, expensive, and prone to introducing systematic biases. These methodological weaknesses tend to distort evaluation metrics and thus the comparative ranking of confidence measures. We introduce MCQA-Eval, an evaluation framework for assessing confidence measures in Natural Language Generation (NLG) that eliminates dependence on an explicit correctness function by leveraging gold-standard correctness labels from multiple-choice datasets. MCQA-Eval enables systematic comparison of both internal-state-based white-box (e.g., logit-based) and consistency-based black-box confidence measures, providing a unified evaluation methodology across different approaches. Through extensive experiments on multiple LLMs and widely used QA datasets, we show that MCQA-Eval provides more efficient and more reliable assessments of confidence estimation methods than existing approaches.
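
The abstract's distinction between white-box and black-box confidence measures can be illustrated with a short, hedged sketch. Both functions below are simplified stand-ins (length-normalized sequence probability and sample agreement), assumed here for illustration rather than the specific measures benchmarked in the paper.

```python
import math
from collections import Counter

def whitebox_confidence(token_logprobs):
    """White-box (logit-based) sketch: length-normalized sequence probability
    computed from per-token log-probabilities of the generated answer.
    The normalization scheme is an illustrative assumption."""
    return math.exp(sum(token_logprobs) / max(len(token_logprobs), 1))

def blackbox_confidence(sampled_answers):
    """Black-box (consistency-based) sketch: fraction of sampled responses
    that agree with the most frequent answer."""
    counts = Counter(sampled_answers)
    return counts.most_common(1)[0][1] / len(sampled_answers)

# Hypothetical usage with made-up numbers:
print(whitebox_confidence([-0.1, -0.3, -0.05]))   # ~0.86
print(blackbox_confidence(["B", "B", "A", "B"]))  # 0.75
```

Because MCQA gold labels supply correctness directly, both families of measures can be scored with the same downstream metric, which is what makes the comparison apples-to-apples.
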
Problem

Research questions and friction points this paper is trying to address.

Robust confidence estimation in LLMs
Eliminating noisy correctness heuristics
Unified evaluation of confidence measures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gold-standard correctness labels
Eliminates the explicit correctness function
Unified evaluation methodology