Optimal Self-Consistency for Efficient Reasoning with Large Language Models

📅 2025-11-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Self-consistency (SC), a widely adopted inference technique for large language models, suffers from high computational overhead and lacks a unified scaling theory and sample-efficiency analysis. Method: This paper establishes, for the first time, a power-law scaling relationship between SC performance and sample size. Building on this theoretical foundation, we propose Blend-ASC, a hyperparameter-free, budget-adaptive method that unifies mode estimation and voting theory to dynamically allocate sampling resources, achieving an optimal trade-off between fixed and adaptive sampling strategies. Contribution/Results: Experiments demonstrate that Blend-ASC achieves state-of-the-art reasoning accuracy across multiple benchmarks while requiring only 14.7% of the samples needed by standard SC, a 6.8× reduction. This substantially improves sample efficiency and computational scalability without compromising accuracy.

๐Ÿ“ Abstract
Self-consistency (SC) is a widely used test-time inference technique for improving performance in chain-of-thought reasoning. It involves generating multiple responses, or samples, from a large language model (LLM) and selecting the most frequent answer. This procedure can naturally be viewed as a majority vote or empirical mode estimation. Despite its effectiveness, SC is prohibitively expensive at scale when naively applied to datasets, and it lacks a unified theoretical treatment of sample efficiency and scaling behavior. In this paper, we provide the first comprehensive analysis of the scaling behavior of SC and its variants, drawing on mode estimation and voting theory. We derive and empirically validate power-law scaling for self-consistency across datasets, and analyze the sample efficiency of fixed-allocation and dynamic-allocation sampling schemes. From these insights, we introduce Blend-ASC, a novel variant of self-consistency that dynamically allocates samples to questions during inference, achieving state-of-the-art sample efficiency. Our approach uses 6.8× fewer samples than vanilla SC on average, outperforming both fixed- and dynamic-allocation SC baselines and demonstrating the superior efficiency of our approach. In contrast to existing variants, Blend-ASC is hyperparameter-free and can fit an arbitrary sample budget, so it can be easily applied to any self-consistency application.
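The majority-vote procedure described in the abstract can be sketched in a few lines. This is a minimal illustration, assuming a caller-supplied `generate` function that returns one sampled answer per call; it is not the paper's implementation.

```python
from collections import Counter

def self_consistency(generate, prompt, n_samples=16):
    """Vanilla SC: draw n_samples answers from the model and return
    the most frequent one (the empirical mode of the answer distribution)."""
    answers = [generate(prompt) for _ in range(n_samples)]
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer
```

In practice `generate` would wrap a stochastic LLM call (e.g. temperature sampling over a chain-of-thought prompt, with the final answer extracted from the completion).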
Problem

Research questions and friction points this paper is trying to address.

Optimizing sample efficiency in self-consistency reasoning for large language models
Reducing computational costs of majority voting in chain-of-thought reasoning
Developing dynamic allocation methods for scalable self-consistency applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic sample allocation for efficient reasoning
Hyperparameter-free self-consistency variant
Power law scaling analysis for optimization
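To make the idea of dynamic sample allocation concrete, the sketch below draws samples one question at a time and stops early once the leading answer is ahead of the runner-up by a fixed vote margin. This stopping rule is an illustrative assumption, not the paper's Blend-ASC allocation scheme (which is hyperparameter-free and budget-adaptive); `generate`, `max_samples`, and `margin` are all hypothetical names.

```python
from collections import Counter

def adaptive_sc(generate, prompt, max_samples=64, margin=4):
    """Sequential SC with early stopping: sample until the top answer
    leads the runner-up by `margin` votes, or the budget is exhausted.
    Returns (answer, number_of_samples_used)."""
    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[generate(prompt)] += 1
        top = counts.most_common(2)
        lead = top[0][1] - (top[1][1] if len(top) > 1 else 0)
        if lead >= margin:
            break
    return top[0][0], n
```

Easy questions, where the model's answers concentrate quickly, terminate after a handful of samples, while hard questions consume more of the budget; this is the intuition behind the sample savings that dynamic-allocation variants report.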