🤖 AI Summary
Existing LLM self-refinement methods rely on a fixed number of iterations and cannot adapt to how generation actually unfolds. This work proposes ProActive Self-Refinement (PASR), a framework that introduces an *active decision-making mechanism* into generation: the model dynamically decides *whether*, *when*, and *how* to refine based on its internal state and the evolving context, enabling fine-grained online optimization. PASR employs a lightweight, parameter-free control mechanism that requires no additional training and operates entirely at inference time. Evaluated across ten diverse tasks, PASR improves both inference quality and efficiency: on Qwen3-8B it raises accuracy by 8.2% over standard generation while cutting average token consumption by 41.6%. Its core contribution is breaking the passive, fixed-step iteration paradigm and establishing the first *generation-driven*, actively adaptive self-refinement framework for LLMs.
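To make the control flow concrete, below is a minimal sketch of the refine-during-generation loop the summary describes, assuming a segment-level decoding interface. The helpers `generate_segment`, `should_refine`, and `refine_segment` are hypothetical placeholders for illustration, not PASR's actual implementation; how the paper drives the decision from the model's internal state is specified in the full text.

```python
# A minimal sketch of the proactive refine-during-generation idea.
# All helper names here are hypothetical illustrations, not PASR's API.

from dataclasses import dataclass, field


@dataclass
class GenerationState:
    """Evolving context: the text produced so far, segment by segment."""
    segments: list[str] = field(default_factory=list)

    @property
    def context(self) -> str:
        return "".join(self.segments)


def generate_segment(prompt: str, context: str) -> str:
    """Placeholder: decode the next chunk of tokens from the LLM."""
    raise NotImplementedError


def should_refine(context: str, segment: str) -> bool:
    """Placeholder: the model's own online judgment (e.g., an emitted
    control signal) of whether the latest segment needs refinement now,
    rather than after a fixed number of whole-response iterations."""
    raise NotImplementedError


def refine_segment(context: str, segment: str) -> str:
    """Placeholder: rewrite only the flagged segment in light of the
    current context, instead of regenerating the entire response."""
    raise NotImplementedError


def proactive_generate(prompt: str, max_segments: int = 32) -> str:
    state = GenerationState()
    for _ in range(max_segments):
        segment = generate_segment(prompt, state.context)
        if not segment:  # model signalled completion
            break
        # Key difference from fixed-iteration self-refinement: the
        # refine/continue decision is made online, once per segment.
        if should_refine(state.context, segment):
            segment = refine_segment(state.context, segment)
        state.segments.append(segment)
    return state.context
```

The point of the sketch is where the decision sits: it is made after every segment rather than after a preset number of full passes, so refinement cost is paid only where the model judges it necessary. That placement is what lets an approach like PASR improve accuracy while also reducing total token consumption.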
📝 Abstract
Recent advances in self-refinement have demonstrated significant potential for improving the outputs of large language models (LLMs) through iterative refinement. However, most existing self-refinement methods rely on a reactive process with a fixed number of iterations, making it difficult to determine the optimal timing and content of refinement based on the evolving generation context. Inspired by the way humans dynamically refine their thoughts during execution, we propose ProActive Self-Refinement (PASR), a novel method that enables LLMs to refine their outputs during the generation process. Unlike methods that regenerate entire responses, PASR proactively decides whether, when, and how to refine based on the model's internal state and evolving context. We conduct extensive experiments on a diverse set of 10 tasks to evaluate the effectiveness of PASR. Experimental results show that PASR significantly enhances problem-solving performance. In particular, on Qwen3-8B, PASR reduces average token consumption by 41.6 percent compared to standard generation, while also achieving an 8.2 percent improvement in accuracy. Our code and all baselines used in the paper are available on GitHub.