AI Summary
Consistency-based uncertainty quantification (UQ) for short-answer question answering (QA) suffers from unreliable estimates when candidates are drawn via multinomial sampling, due to high variance across runs and redundancy among sampled responses.
Method: This work introduces beam search into the consistency-based UQ framework for the first time, enabling generation of high-probability, controllably diverse candidate answer sets. We derive a theoretical lower bound on the beam set's probability mass under which beam search attains a smaller estimation error than multinomial sampling; further, we integrate consistency aggregation with uncertainty calibration to enhance estimation stability.
Results: Extensive experiments across six standard QA benchmarks demonstrate state-of-the-art UQ performance: consistency score variance is reduced by up to 42%, mean estimation error decreases by 19.3% relative to multinomial sampling, and results exhibit strong reproducibility.
Abstract
Consistency-based methods have emerged as an effective approach to uncertainty quantification (UQ) in large language models. These methods typically rely on several generations obtained via multinomial sampling, measuring their agreement level. However, in short-form QA, multinomial sampling is prone to producing duplicates due to peaked distributions, and its stochasticity introduces considerable variance in uncertainty estimates across runs. We introduce a new family of methods that employ beam search to generate candidates for consistency-based UQ, yielding improved performance and reduced variance compared to multinomial sampling. We also provide a theoretical lower bound on the beam set probability mass under which beam search achieves a smaller error than multinomial sampling. We empirically evaluate our approach on six QA datasets and find that its consistent improvements over multinomial sampling lead to state-of-the-art UQ performance.
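The core idea above can be sketched with a toy consistency estimator. This is an illustrative simplification, not the paper's exact method: `consistency_uncertainty` is a hypothetical helper that scores uncertainty as one minus the (optionally probability-weighted) agreement of a candidate answer set, and the example answer sets and probabilities are made up. It only illustrates why multinomial sampling on a peaked distribution wastes budget on duplicates, while a beam-search-style set supplies distinct candidates with scores.

```python
from collections import Counter

def consistency_uncertainty(candidates, probs=None):
    """Toy consistency-based uncertainty: 1 - agreement of the candidate set.

    candidates: short answers (strings), e.g. sampled or beam-searched
    generations. probs: optional per-candidate probabilities (e.g. beam
    scores). Agreement is the (weighted) share of the most frequent answer
    after trivial normalization; real systems use stronger equivalence
    checks (semantic matching) than lowercasing.
    """
    norm = [c.strip().lower() for c in candidates]
    if probs is None:
        # Unweighted: fraction of generations agreeing with the mode.
        counts = Counter(norm)
        agreement = counts.most_common(1)[0][1] / len(norm)
    else:
        # Weighted: probability mass of the dominant answer, renormalized
        # over the candidate set's total mass.
        mass = {}
        for ans, p in zip(norm, probs):
            mass[ans] = mass.get(ans, 0.0) + p
        agreement = max(mass.values()) / sum(probs)
    return 1.0 - agreement

# Multinomial sampling on a peaked distribution tends to return duplicates:
sampled = ["Paris", "Paris", "Paris", "paris", "London"]

# A beam-search-style candidate set: distinct answers with scores
# (hypothetical numbers, standing in for beam probabilities):
beams = ["Paris", "London", "Berlin"]
beam_probs = [0.85, 0.10, 0.05]
```

With these inputs, the sampled set yields an uncertainty of 0.2 (four of five generations agree), while the weighted beam set yields 0.15; the weighted variant uses each distinct candidate once, which is what removes the run-to-run variance of repeated sampling.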