Refining Answer Distributions for Improved Large Language Model Reasoning

📅 2024-12-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing strategies for fusing multiple LLM responses—such as self-consistency and progressive-hint prompting—use those responses inefficiently and struggle to identify the optimal reasoning path. To address this, we propose the Refined Answer Distribution (RAD) framework, which formalizes multi-step reasoning as a *mode search problem*. RAD iteratively samples responses to build a Monte Carlo approximation of the underlying answer distribution, then applies kernel density estimation to locate its mode—the most probable answer—enabling adaptive response integration. Unlike static aggregation schemes, RAD forms a closed loop that combines multi-step prompting with response feedback. Evaluated on GSM8K, SVAMP, and CommonsenseQA, RAD consistently outperforms baseline methods, achieving average accuracy gains of 3.2–5.7 percentage points with greater robustness.
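The mode-search step described in the summary can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name `kde_mode`, the Gaussian kernel, the fixed bandwidth, and the grid resolution are all assumptions.

```python
import numpy as np

def kde_mode(answers, bandwidth=1.0, grid_points=512):
    """Estimate the mode of a Monte Carlo sample of numeric answers by
    summing a Gaussian kernel centred on each sampled answer and taking
    the grid point of highest total density."""
    a = np.asarray(answers, dtype=float)
    if np.ptp(a) == 0:  # all samples agree, so the mode is that answer
        return float(a[0])
    grid = np.linspace(a.min(), a.max(), grid_points)
    # Kernel density estimate evaluated on the grid (unnormalised).
    density = np.exp(-0.5 * ((grid[:, None] - a[None, :]) / bandwidth) ** 2).sum(axis=1)
    return float(grid[np.argmax(density)])

# Hypothetical final answers sampled for one GSM8K-style question.
samples = [18, 18, 20, 18, 17, 18, 22, 18]
print(kde_mode(samples))  # the density peaks near 18, the most probable answer
```

In the paper's closed-loop framing, an estimate like this would be recomputed as new responses arrive, so sampling can stop once the mode stabilises rather than after a fixed budget.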

📝 Abstract
Large Language Models (LLMs) have exhibited an impressive capability to perform reasoning tasks, especially if they are encouraged to generate a sequence of intermediate steps. Reasoning performance can be improved by suitably combining multiple LLM responses, generated either in parallel in a single query, or via sequential interactions with LLMs throughout the reasoning process. Existing strategies for combination, such as self-consistency and progressive-hint-prompting, make inefficient usage of the LLM responses. We present Refined Answer Distributions, a novel and principled algorithmic framework to enhance the reasoning capabilities of LLMs. Our approach can be viewed as an iterative sampling strategy for forming a Monte Carlo approximation of an underlying distribution of answers, with the goal of identifying the mode -- the most likely answer. Empirical evaluation on several reasoning benchmarks demonstrates the superiority of the proposed approach.
Problem

Research questions and friction points this paper is trying to address.

Improving reasoning in LLMs by refining answer distributions
Enhancing efficiency in combining multiple LLM responses
Identifying most likely answers via iterative sampling strategy
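For contrast, the self-consistency baseline named above reduces to a majority vote over sampled final answers, discarding all distributional information beyond the top count. A minimal sketch (the function name and sample answers are illustrative):

```python
from collections import Counter

def self_consistency(answers):
    """Baseline aggregation: return the most frequent sampled answer."""
    return Counter(answers).most_common(1)[0][0]

# Five hypothetical sampled answers; the majority answer wins.
print(self_consistency(["18", "18", "20", "18", "17"]))  # prints 18
```

This is the inefficiency the Problem section points at: every minority answer is thrown away, whereas a refined answer distribution can weigh how samples cluster.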
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative sampling for Monte Carlo approximation
Refined Answer Distributions for reasoning enhancement
Identifying mode from underlying answer distributions