Self-ensemble: Mitigating Confidence Distortion for Large Language Models

📅 2025-06-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from confidence miscalibration in multiple-choice question answering (MCQA): as the number of answer options increases, confidence in correct answers is systematically underestimated while confidence in incorrect options is overestimated, leading to substantial performance degradation. To address this, we propose a training-free, plug-and-play self-ensemble inference framework whose novelty lies in its grouping-based selection and cross-group predictive integration. By leveraging customized attention masks and position encodings, our method groups candidate answers and performs weighted fusion of the per-group predictions, thereby calibrating confidence directly at the inference stage. Crucially, it requires no labeled data or model fine-tuning. Evaluated across three major LLM families and multiple standard MCQA benchmarks, our approach consistently improves accuracy, effectively mitigates confidence miscalibration, and outperforms both standard decoding strategies and state-of-the-art baselines.

๐Ÿ“ Abstract
Although Large Language Models (LLMs) perform well in general fields, they exhibit a confidence distortion problem on multiple-choice question answering (MCQA), particularly as the number of answer choices increases. Specifically, on MCQA with many choices, LLMs suffer from under-confidence in correct predictions and over-confidence in incorrect ones, leading to substantially degraded performance. To solve this problem, we propose Self-ensemble. Our method splits the choices into several groups and ensembles LLM predictions across these groups to reach a final decision. The advantage of Self-ensemble is its plug-and-play nature: it can be integrated into existing LLM architectures via a designed attention mask and positional encoding, without requiring labeled datasets for parameter tuning. Experimental results across three LLMs and multiple datasets demonstrate that Self-ensemble comprehensively addresses the confidence distortion problem of LLMs, outperforming standard inference as well as baseline methods.
Problem

Research questions and friction points this paper is trying to address.

LLMs show confidence distortion on multiple-choice questions with many options
Under-confidence in correct answers and over-confidence in wrong ones
Performance degrades as the number of answer choices grows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-ensemble splits choices into groups and fuses per-group predictions
Plug-and-play integration with existing LLM architectures, no labeled data or fine-tuning
Implemented via a customized attention mask and positional encoding
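The grouping-and-fusion idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the actual method operates inside the model using customized attention masks and position encodings, whereas here `score_fn` is a hypothetical stand-in that returns the LLM's normalized confidence for each choice within a group.

```python
import math

def self_ensemble(choices, score_fn, group_size=2):
    """Sketch of Self-ensemble: split the answer choices into small
    groups, score each group independently, then fuse the per-group
    confidences into a final prediction by averaging."""
    groups = [choices[i:i + group_size]
              for i in range(0, len(choices), group_size)]
    totals = {c: 0.0 for c in choices}
    counts = {c: 0 for c in choices}
    for group in groups:
        probs = score_fn(group)  # per-choice confidences within this group
        for choice, p in zip(group, probs):
            totals[choice] += p
            counts[choice] += 1
    # Weighted fusion: average each choice's confidence across the
    # groups it appeared in (here each choice appears exactly once).
    fused = {c: totals[c] / counts[c] for c in choices}
    return max(fused, key=fused.get), fused

# Toy stand-in for an LLM: fixed per-choice logits, softmaxed per group.
logits = {"A": 1.0, "B": 3.0, "C": 0.5, "D": 2.0}

def mock_score(group):
    exps = [math.exp(logits[c]) for c in group]
    z = sum(exps)
    return [e / z for e in exps]

best, fused = self_ensemble(list("ABCD"), mock_score)
```

Scoring two options at a time rather than all four reflects the intuition in the abstract: confidence distortion worsens as the number of simultaneous choices grows, so ensembling over smaller groups yields better-calibrated scores.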