🤖 AI Summary
Can language models improve their own responses without external verification signals? Prior work predominantly focuses on verifiable, closed-domain tasks, failing to capture the complexity of real-world user needs expressed via open-ended feedback. This paper introduces RefineBench, an open-domain refinement benchmark comprising 1,000 challenging questions across 11 diverse domains, paired with a fine-grained checklist-based evaluation framework that distinguishes *guided refinement* (leveraging natural-language feedback) from *self-refinement* (requiring no external guidance). A human-AI collaborative evaluation protocol ensures rigorous assessment. Experiments reveal that state-of-the-art models remain weak at self-refinement: GPT-5 scores only 29.1% at baseline and most models barely improve across iterations, yet with targeted feedback models rapidly approach near-perfect performance within five guided refinement turns. This work exposes a fundamental limitation in current LLMs' self-improvement capability and establishes a reproducible, principled evaluation paradigm for open-domain response refinement.
📝 Abstract
Can language models (LMs) self-refine their own responses? This question is increasingly relevant as a wide range of real-world user interactions involve refinement requests. However, prior studies have largely tested LMs' refinement abilities on verifiable tasks such as competition math or symbolic reasoning with simplified scaffolds, whereas users often pose open-ended queries and provide varying degrees of feedback on what they desire. The recent advent of reasoning models that exhibit self-reflection patterns in their chains-of-thought further motivates this question. To analyze this, we introduce RefineBench, a benchmark of 1,000 challenging problems across 11 domains paired with a checklist-based evaluation framework. We evaluate two refinement modes: (1) guided refinement, where an LM is provided natural language feedback, and (2) self-refinement, where LMs attempt to improve without guidance. In the self-refinement setting, even frontier LMs such as Gemini 2.5 Pro and GPT-5 achieve modest baseline scores of 31.3% and 29.1%, respectively, and most models fail to consistently improve across iterations (e.g., Gemini-2.5-Pro gains only +1.8%, while DeepSeek-R1 declines by -0.1%). By contrast, in guided refinement, both proprietary LMs and large open-weight LMs (>70B) can leverage targeted feedback to refine responses to near-perfect levels within five turns. These findings suggest that frontier LMs require breakthroughs to self-refine their incorrect responses, and that RefineBench provides a valuable testbed for tracking progress.
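To make the checklist-based setup concrete, here is a minimal sketch of how a guided-refinement loop over per-question checklists might look. This is not the paper's implementation: the `judge` and `model` callables, the feedback template, and the scoring rule (fraction of checklist items satisfied) are all illustrative assumptions.

```python
# Hypothetical sketch of checklist scoring plus a guided-refinement loop.
# `judge(response, item)` returns True if the response satisfies a checklist
# item; `model(question, previous, feedback)` produces a (refined) response.
# Both are stand-ins for LLM calls, not RefineBench's actual interfaces.

def checklist_score(response: str, checklist: list[str], judge) -> float:
    """Score a response as the fraction of checklist items satisfied."""
    satisfied = sum(judge(response, item) for item in checklist)
    return satisfied / len(checklist)

def guided_refinement(model, judge, question: str, checklist: list[str],
                      max_turns: int = 5) -> list[float]:
    """Refine up to max_turns times; feedback names the unmet items."""
    response = model(question)
    scores = [checklist_score(response, checklist, judge)]
    for _ in range(max_turns):
        unmet = [item for item in checklist if not judge(response, item)]
        if not unmet:  # already perfect; stop early
            break
        feedback = "Your answer does not yet address: " + "; ".join(unmet)
        response = model(question, previous=response, feedback=feedback)
        scores.append(checklist_score(response, checklist, judge))
    return scores
```

The self-refinement condition would use the same loop but replace the targeted `feedback` string with a generic prompt (e.g. "review and improve your answer"), so the model receives no signal about which checklist items it missed.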