🤖 AI Summary
Self-consistency decoding (SC) improves large language model (LLM) reasoning accuracy by sampling many, often lengthy, reasoning paths, which incurs substantial computational overhead.
Method: We propose Confidence-Informed Self-Consistency (CISC), which weights the majority vote over candidate answers with confidence scores the LLM assigns to its *own* reasoning paths. These scores are elicited directly from the model during generation, requiring no external annotations or additional parameters. The work also introduces within-question confidence evaluation, which measures how well a confidence score separates correct from incorrect answers to the same question.
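The voting mechanism is simple enough to sketch. Below is a minimal illustration of confidence-weighted majority voting; the softmax normalization and its temperature are illustrative choices in this sketch, not values taken from the paper.

```python
import math
from collections import defaultdict

def cisc_vote(samples, temperature=1.0):
    """Confidence-weighted majority vote over sampled reasoning paths.

    `samples` is a list of (answer, confidence) pairs, where each
    confidence is a score the model reported for its own reasoning path.
    A softmax (the temperature is a free knob in this sketch) turns the
    scores into vote weights, so high-confidence paths count for more.
    With equal confidences this reduces to plain self-consistency.
    """
    scaled = [conf / temperature for _, conf in samples]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # numerically stable softmax

    tally = defaultdict(float)
    for (answer, _), w in zip(samples, weights):
        tally[answer] += w
    return max(tally, key=tally.get)

# Five sampled paths for one question: "42" wins the weighted vote.
paths = [("42", 0.9), ("41", 0.2), ("42", 0.8), ("17", 0.1), ("41", 0.4)]
print(cisc_vote(paths))  # -> 42
```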
Contribution/Results: Experiments across nine LLMs and four reasoning benchmarks show that CISC outperforms standard SC in nearly all configurations while reducing the required number of sampled reasoning paths by over 40% on average. Notably, standard calibration metrics turn out to be poor predictors of within-question discrimination (the most calibrated confidence method was the least effective for CISC), and the analyses indicate that LLMs can meaningfully judge the correctness of their own outputs.
📝 Abstract
Self-consistency decoding enhances LLMs' performance on reasoning tasks by sampling diverse reasoning paths and selecting the most frequent answer. However, it is computationally expensive, as sampling many of these (lengthy) paths is required to increase the chances that the correct answer emerges as the most frequent one. To address this, we introduce Confidence-Informed Self-Consistency (CISC). CISC performs a weighted majority vote based on confidence scores obtained directly from the model. By prioritizing high-confidence paths, it can identify the correct answer with a significantly smaller sample size. When tested on nine models and four datasets, CISC outperforms self-consistency in nearly all configurations, reducing the required number of reasoning paths by over 40% on average. In addition, we introduce the notion of within-question confidence evaluation, after showing that standard evaluation methods are poor predictors of success in distinguishing correct and incorrect answers to the same question. In fact, the most calibrated confidence method proved to be the least effective for CISC. Lastly, beyond these practical implications, our results and analyses show that LLMs can effectively judge the correctness of their own outputs, contributing to the ongoing debate on this topic.
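To make the within-question notion concrete, the sketch below implements one plausible per-question measure (a hypothetical illustration, not necessarily the paper's exact metric): the fraction of (correct, incorrect) answer pairs for the same question in which the correct answer received the higher confidence, counting ties as half.

```python
def within_question_discrimination(responses):
    """Per-question confidence quality: fraction of (correct, incorrect)
    pairs where the correct answer got the higher confidence (ties = 0.5).
    An illustrative pairwise, AUC-style measure, not necessarily the
    paper's exact definition.

    `responses` is a list of (is_correct, confidence) pairs sampled for
    a single question.
    """
    correct = [conf for ok, conf in responses if ok]
    wrong = [conf for ok, conf in responses if not ok]
    if not correct or not wrong:
        return None  # undefined when all samples share the same correctness
    wins = sum(1.0 if c > w else 0.5 if c == w else 0.0
               for c in correct for w in wrong)
    return wins / (len(correct) * len(wrong))

# Four sampled answers to one question: 3 of 4 pairs ranked correctly.
samples = [(True, 0.9), (False, 0.3), (True, 0.7), (False, 0.8)]
print(within_question_discrimination(samples))  # -> 0.75
```

Unlike dataset-level calibration, this score is computed separately for each question, which is what matters for reweighting votes among competing answers to that question.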