🤖 AI Summary
This work addresses the systematic underestimation of confidence by existing training-free calibration methods when a question has multiple correct answers, a bias stemming from semantic discrepancies among equally valid responses. To investigate this issue, the authors introduce MACE, a benchmark of 12,000 cross-domain questions with varying numbers of valid answers, which provides the first systematic evidence of this calibration bias. They propose Semantic Confidence Aggregation (SCA), a method that uses large language models to cluster semantically similar answers and aggregate their confidences, enabling unified calibration across both single- and multi-answer scenarios. Experiments show that SCA outperforms 15 existing calibration methods on MACE, achieving state-of-the-art calibration in mixed-answer settings while retaining strong calibration on traditional single-answer tasks.
📝 Abstract
Confidence calibration is essential for making large language models (LLMs) reliable, yet existing training-free methods have been primarily studied under single-answer question answering. In this paper, we show that these methods break down in the presence of multiple valid answers, where disagreement among equally correct responses leads to systematic underestimation of confidence. To enable a systematic study of this phenomenon, we introduce MACE, a benchmark of 12,000 factual questions spanning six domains with varying numbers of correct answers. Experiments across 15 representative calibration methods and four LLM families (7B-72B) reveal that while accuracy increases with answer cardinality, estimated confidence consistently decreases, causing severe miscalibration for questions with mixed answer counts. To address this issue, we propose Semantic Confidence Aggregation (SCA), which aggregates confidence over multiple high-probability sampled responses. SCA achieves state-of-the-art calibration performance under mixed-answer settings while preserving strong calibration on single-answer questions.
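The core aggregation idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `equiv` function (a simple string-normalization stand-in for the paper's LLM-based semantic clustering) and the toy answers and probabilities are assumptions introduced here for demonstration.

```python
from collections import defaultdict

def semantic_confidence(samples, equiv):
    """Pool probability mass over semantically equivalent sampled answers.

    samples: list of (answer_text, probability) pairs from repeated sampling.
    equiv:   callable mapping an answer to a canonical cluster key; here a
             cheap normalizer stands in for LLM-based semantic clustering.
    Returns {cluster_key: normalized aggregated confidence}.
    """
    clusters = defaultdict(float)
    for answer, prob in samples:
        clusters[equiv(answer)] += prob  # equivalent answers share one cluster
    total = sum(clusters.values()) or 1.0
    return {key: mass / total for key, mass in clusters.items()}

# Toy example: two surface forms of the same fact pool their mass instead of
# splitting it, so the correct answer is no longer under-confident.
samples = [
    ("Paris", 0.40),
    ("paris, France", 0.25),  # semantically equivalent to "Paris"
    ("Lyon", 0.15),
]
conf = semantic_confidence(
    samples, equiv=lambda a: a.lower().split(",")[0].strip()
)
```

Without clustering, the top single answer ("Paris") would carry only 0.40 confidence; after semantic aggregation its cluster carries the pooled mass, illustrating how disagreement among equally correct responses no longer depresses the estimate.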