Self-Correcting Large Language Models: Generation vs. Multiple Choice

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how task structure—specifically, open-ended generation versus multiple-choice selection—affects large language models’ (LLMs) self-correction capabilities. We conduct controlled comparative experiments across diverse model families and scales on natural language understanding and reasoning benchmarks, systematically analyzing the efficacy of self-consistency and self-reflection strategies under distinct output-space constraints. Results reveal a fundamental trade-off: open-ended generation enables richer semantic reconstruction and compositional optimization but exhibits lower output stability; in contrast, multiple-choice selection yields higher decision consistency yet remains critically dependent on candidate set quality. To our knowledge, this is the first work to uncover the deep interplay between task formalization, output space geometry, and self-correction efficacy. We propose a “task-adaptive” self-correction design principle, offering both theoretical insights and practical guidelines for enhancing reliability in LLM-based reasoning and decision-making agents.
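The summary contrasts self-consistency under two output spaces: free-form answers versus a fixed candidate set. A minimal sketch of that contrast, with a hypothetical `sample_answers` stub standing in for repeated stochastic LLM calls (a real system would call a model API with temperature > 0):

```python
from collections import Counter

def sample_answers(prompt, n=5):
    # Hypothetical stand-in for n stochastic LLM samples; fixed here
    # so the example is deterministic.
    return ["42", "42", "41", "42", "41"][:n]

def self_consistency_open(prompt, n=5):
    # Open-ended generation: majority vote over free-form final answers.
    votes = Counter(sample_answers(prompt, n))
    return votes.most_common(1)[0][0]

def self_consistency_mc(prompt, options, n=5):
    # Multiple-choice: votes are restricted to the candidate set, so the
    # outcome is capped by the quality of the provided options.
    votes = Counter(a for a in sample_answers(prompt, n) if a in options)
    return votes.most_common(1)[0][0] if votes else options[0]
```

Under this toy setup the open-ended vote settles on the modal answer, while the multiple-choice variant can only return something from `options`, illustrating the paper's point that selection is "critically dependent on candidate set quality."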

📝 Abstract
Large language models have recently demonstrated remarkable abilities to self-correct their responses through iterative refinement, often referred to as self-consistency or self-reflection. However, the dynamics of this self-correction mechanism may differ substantially depending on whether the model is tasked with open-ended text generation or with selecting the most appropriate response from multiple predefined options. In this paper, we conduct a systematic investigation of these two paradigms by comparing performance trends and error-correction behaviors across various natural language understanding and reasoning tasks, covering language models of different scales and families. Our experimental results reveal distinct patterns of improvement and failure modes: while open-ended generation often benefits from the flexibility of re-interpretation and compositional refinement, multiple-choice selection can leverage clearer solution boundaries but may be limited by the provided options. This contrast also reflects the dual demands faced by emerging agentic LLM applications: effective agents must not only generate and refine open-ended plans or explanations, but also make reliable discrete choices when operating within constrained action spaces. Our findings, therefore, highlight that the design of self-correction mechanisms should take into account the interaction between task structure and output space, with implications for both knowledge-intensive reasoning and decision-oriented applications of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Comparing self-correction in open-ended generation versus multiple-choice tasks
Analyzing performance trends across different model scales and reasoning tasks
Investigating how task structure interacts with self-correction mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-correction mechanisms differ between generation and multiple-choice tasks
Open-ended generation benefits from flexibility and compositional refinement
Multiple-choice selection leverages clear boundaries but is option-limited
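The self-reflection side of the contribution is an iterative critique-and-revise loop. A generic sketch of that loop, assuming hypothetical `critique_fn` and `revise_fn` callables (in practice these would be further LLM calls):

```python
def self_reflect(draft, critique_fn, revise_fn, max_rounds=3):
    # Generic self-reflection loop: critique the current answer and
    # revise until the critique reports no issues or rounds run out.
    answer = draft
    for _ in range(max_rounds):
        issues = critique_fn(answer)
        if not issues:
            break
        answer = revise_fn(answer, issues)
    return answer

# Toy stand-ins (hypothetical): flag and strip a spurious token.
flag = lambda a: ["spurious token"] if "??" in a else []
fix = lambda a, issues: a.replace("??", "").strip()
```

In open-ended generation, `revise_fn` can rewrite the answer freely (compositional refinement); in a multiple-choice setting it could only switch among the given options, which is the structural asymmetry the bullets above describe.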