LLMs Can Generate a Better Answer by Aggregating Their Own Responses

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) rely on prompting techniques such as self-correction and response selection to improve performance on complex tasks; however, these methods are constrained by the models' limited discriminative capability, since common post-training procedures provide no explicit supervision for discriminative judgment. To address this, the authors propose Generative Self-Aggregation (GSA), a purely generative framework that samples diverse candidate responses and synthesizes them into an improved output, requiring neither discriminative model capabilities nor verifiable tokens. Because it does not depend on majority voting over verifiable answers, GSA applies broadly across mathematical reasoning, knowledge-intensive question answering, code generation, and open-ended dialogue. Experiments show that GSA consistently outperforms baselines, including self-consistency, across multiple benchmarks in both accuracy and response quality.

📝 Abstract
Large Language Models (LLMs) have shown remarkable capabilities across tasks, yet they often require additional prompting techniques when facing complex problems. While approaches like self-correction and response selection have emerged as popular solutions, recent studies have shown these methods perform poorly when relying on the LLM itself to provide feedback or selection criteria. We argue this limitation stems from the fact that common LLM post-training procedures lack explicit supervision for discriminative judgment tasks. In this paper, we propose Generative Self-Aggregation (GSA), a novel prompting method that improves answer quality without requiring the model's discriminative capabilities. GSA first samples multiple diverse responses from the LLM, then aggregates them to obtain an improved solution. Unlike previous approaches, our method does not require the LLM to correct errors or compare response quality; instead, it leverages the model's generative abilities to synthesize a new response based on the context of multiple samples. While GSA shares similarities with the self-consistency (SC) approach for response aggregation, SC requires specific verifiable tokens to enable majority voting. In contrast, our approach is more general and can be applied to open-ended tasks. Empirical evaluation demonstrates that GSA effectively improves response quality across various tasks, including mathematical reasoning, knowledge-based problems, and open-ended generation tasks such as code synthesis and conversational responses.
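The abstract describes GSA as a two-stage procedure: sample several diverse responses, then make one more generative call that synthesizes a new answer conditioned on all of them, with no error correction or quality comparison. A minimal sketch, assuming a hypothetical `generate(prompt, temperature)` callable standing in for the LLM API (the prompt wording and parameter names here are illustrative, not the paper's exact implementation):

```python
def generative_self_aggregation(question, generate, n_samples=3, temperature=0.8):
    """Sketch of GSA: sample diverse responses, then ask the model to
    synthesize a new answer from their context (no voting, no self-ranking)."""
    # Step 1: sample n diverse candidate responses at a higher temperature.
    candidates = [generate(question, temperature=temperature)
                  for _ in range(n_samples)]
    # Step 2: build an aggregation prompt showing the model its own samples.
    context = "\n\n".join(f"Response {i + 1}:\n{r}"
                          for i, r in enumerate(candidates))
    aggregation_prompt = (
        f"Question: {question}\n\n"
        f"Here are several candidate responses:\n\n{context}\n\n"
        "Drawing on the useful parts of these responses, write a single "
        "improved answer to the question."
    )
    # Step 3: one final generative call produces the aggregated answer;
    # greedy decoding keeps the synthesis focused.
    return generate(aggregation_prompt, temperature=0.0), candidates
```

The key design choice is that the final step is purely generative: the model never has to judge which candidate is best, only to write a new response with the candidates in context.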
Problem

Research questions and friction points this paper is trying to address.

Improves LLM answer quality without discriminative capabilities
Aggregates multiple responses to synthesize better solutions
Applies to open-ended tasks without verifiable tokens
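The "verifiable tokens" constraint refers to the self-consistency baseline, which needs each sampled response to end in a discrete, comparable answer (e.g. a number) so that majority voting is possible. A minimal sketch of that baseline, for contrast with GSA:

```python
from collections import Counter

def self_consistency(final_answers):
    """Self-consistency baseline: majority vote over extracted final answers.
    Only applicable when each response yields a verifiable token to vote on;
    it breaks down for open-ended outputs like code or dialogue."""
    return Counter(final_answers).most_common(1)[0][0]

# self_consistency(["42", "42", "17"]) -> "42"
```

GSA removes this requirement by replacing the vote with a generative synthesis step over the full response texts.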
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Self-Aggregation (GSA): a purely generative prompting method
Synthesizes a new response from the context of multiple diverse samples
Generalizes self-consistency to open-ended tasks without verifiable tokens
👥 Authors
Zichong Li (Georgia Institute of Technology)
Xinyu Feng (Georgia Tech)
Yuheng Cai (Georgia Tech)
Zixuan Zhang (Georgia Institute of Technology)
Tianyi Liu (Amazon)
Chen Liang (Microsoft Azure)
Weizhu Chen (Microsoft, Technical Fellow)
Haoyu Wang (University at Albany)
Tuo Zhao (Georgia Tech)