Are Retrials All You Need? Enhancing Large Language Model Reasoning Without Verbalized Feedback

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from high computational overhead and low efficiency in complex reasoning tasks due to reliance on self-assessment and verbalized feedback. To address this, we propose Feedback-Free Retry (FFR), a novel paradigm that eliminates explicit introspection and linguistic feedback. FFR employs zero-shot retry scheduling, a lightweight error-detection trigger mechanism, and multi-round non-iterative output sampling to achieve low-cost, low-latency performance gains. Evaluated on multiple reasoning benchmarks—including GSM8K, MATH, and TheoremQA—FFR consistently outperforms baselines such as Chain-of-Thought and Self-Refine, yielding average accuracy improvements of 3.2–7.8 percentage points while reducing inference token consumption by approximately 40%. Our key contribution is the first empirical demonstration that a minimal, feedback-free retry strategy—requiring no verbalized self-reflection—can substantially enhance LLM reasoning capability. This establishes a new paradigm for efficient and trustworthy reasoning.

📝 Abstract
Recent advancements in large language models (LLMs) have catalyzed the development of general-purpose autonomous agents, demonstrating remarkable performance in complex reasoning tasks across various domains. This surge has spurred the evolution of a plethora of prompt-based reasoning frameworks. A recent focus has been on iterative reasoning strategies that refine outputs through self-evaluation and verbalized feedback. However, these strategies incur additional computational overhead, since models must recognize and correct their own mistakes, significantly increasing their cost. In this work, we introduce the concept of "retrials without feedback", an embarrassingly simple yet powerful mechanism for enhancing reasoning frameworks by allowing LLMs to retry problem-solving attempts upon identifying incorrect answers. Unlike conventional iterative refinement methods, our method does not require explicit self-reflection or verbalized feedback, simplifying the refinement process. Our findings indicate that simpler retrial-based approaches often outperform more sophisticated reasoning frameworks, suggesting that the benefits of complex methods may not always justify their computational costs. By challenging the prevailing assumption that more intricate reasoning strategies inherently lead to better performance, our work offers new insights into how simpler, more efficient approaches can achieve optimal results. So, are retrials all you need?
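The retrial mechanism described in the abstract can be sketched as a simple loop: sample a fresh, independent attempt, check it, and retry on failure, with no critique of the failed attempt carried forward. The sketch below is a toy illustration under stated assumptions; `generate_answer` and `check_answer` are hypothetical stand-ins (the paper does not publish this code), with a seeded random solver simulating stochastic LLM sampling and an exact-match check standing in for the answer verifier.

```python
import random

def generate_answer(problem, seed):
    """Hypothetical stand-in for one independent LLM attempt.

    Simulates a stochastic solver that is right ~40% of the time;
    in practice this would be a fresh LLM sampling call.
    """
    rng = random.Random(seed)
    return "42" if rng.random() < 0.4 else "wrong"

def check_answer(answer):
    """Hypothetical pass/fail verifier (e.g., exact match or a unit test).

    The retrial mechanism only needs to identify an incorrect answer;
    it produces no verbalized feedback for the next attempt.
    """
    return answer == "42"

def retry_without_feedback(problem, max_retries=5):
    """Retry independent attempts until one passes the check.

    Unlike Self-Refine-style loops, nothing from a failed attempt is
    fed back into the next one: each retry starts from scratch.
    """
    for attempt in range(max_retries):
        answer = generate_answer(problem, seed=attempt)
        if check_answer(answer):
            return answer, attempt + 1
    return None, max_retries

answer, attempts = retry_without_feedback("toy problem")
```

Because each retry is an unconditioned resample, the cost per round is just one generation plus one check, with none of the self-reflection tokens that iterative refinement methods spend.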
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM reasoning without verbalized feedback
Reducing computational cost in iterative refinement methods
Comparing retrial-based approaches with complex reasoning frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrials without feedback enhance reasoning
Simplifies refinement by avoiding verbalized feedback
Outperforms complex methods with lower cost