🤖 AI Summary
To address the high computational cost and latency of self-consistency in large language model (LLM) inference, this paper proposes **Path Consistency (PC)**—a dynamic path optimization mechanism. PC identifies high-confidence common prefixes across multiple parallel decoding paths and leverages them to conditionally prune low-probability branches and guide subsequent generation, replacing conventional stochastic multi-sampling. The method integrates confidence-aware sampling, prefix consistency detection, and conditional decoding, and is fully compatible with standard autoregressive architectures. Experiments across mathematical reasoning, commonsense reasoning, symbolic manipulation, and code generation demonstrate that PC maintains or improves accuracy while reducing inference latency by 7.8%–40.5%. This work establishes the first prefix-consistency-driven dynamic path pruning paradigm, significantly enhancing inference efficiency and computational resource utilization.
📝 Abstract
To enhance the reasoning capabilities of large language models (LLMs), self-consistency has gained significant popularity by combining multiple sampling with majority voting. However, state-of-the-art self-consistency approaches consume substantial computational resources and incur significant additional latency due to repeated sampling, which prevents the method from realizing its full potential in scenarios where computational resources are critical. To improve inference efficiency, this paper introduces *path-consistency*, a method that leverages the confidence of answers generated in earlier branches to identify the prefix of the most promising path. By dynamically guiding the generation of subsequent branches based on this prefix, *path-consistency* mitigates both the errors and the redundancies of random or less useful sampling in self-consistency. As a result, it significantly accelerates inference by reducing the number of tokens generated. Our extensive empirical evaluation shows that *path-consistency* reduces inference latency by 7.8% to 40.5% while maintaining or even improving task accuracy across datasets spanning mathematical reasoning, commonsense reasoning, symbolic reasoning, and code generation.
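The control flow described above can be sketched in a toy form: run a few full branches first, take the prefix of the branch whose answer is most confident, then condition the remaining branches on that prefix before majority voting. This is a minimal illustration of the idea, not the paper's implementation; the `generate` interface, the mock model, and the confidence measure are all hypothetical placeholders.

```python
import random
from collections import Counter

def path_consistency(generate, n_branches, prefix_len, seed=0):
    """Toy sketch of path-consistency (hypothetical interface).

    `generate(prefix, rng)` stands in for an LLM decoding call and
    returns (tokens, answer, confidence). Early branches decode from
    scratch; later branches reuse the most confident branch's prefix,
    so they generate fewer new tokens. The final answer is a majority
    vote over all branches, as in self-consistency.
    """
    rng = random.Random(seed)
    early = max(1, n_branches // 2)
    branches = [generate([], rng) for _ in range(early)]
    # Pick the branch whose answer carries the highest confidence.
    best = max(branches, key=lambda b: b[2])
    prefix = best[0][:prefix_len]
    # Subsequent branches are conditioned on the trusted prefix.
    branches += [generate(prefix, rng) for _ in range(n_branches - early)]
    answers = [b[1] for b in branches]
    return Counter(answers).most_common(1)[0][0]

def toy_generate(prefix, rng):
    # Mock "model": continues the prefix with random tokens; the answer
    # and its confidence are derived from token counts (illustrative only).
    tokens = list(prefix) + [rng.choice("abc") for _ in range(8 - len(prefix))]
    answer = max(set(tokens), key=tokens.count)
    confidence = tokens.count(answer) / len(tokens)
    return tokens, answer, confidence

print(path_consistency(toy_generate, n_branches=8, prefix_len=4))
```

In a real system, `generate` would be a sampled LLM decoding call, confidence would come from the model's answer probabilities, and the shared prefix would let later branches skip recomputing the early tokens, which is the source of the latency savings.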