EvoIdeator: Evolving Scientific Ideas through Checklist-Grounded Reinforcement Learning

📅 2026-03-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a key limitation of large language models: without fine-grained, actionable feedback, they struggle to iteratively refine nascent scientific ideas into high-quality research proposals. The authors propose a reinforcement learning framework that, for the first time, integrates structured checklist-based feedback and multidimensional lexicographic rewards into both training and inference to systematically improve the rigor, feasibility, and evidential grounding of scientific proposals. Built on the Qwen3-4B model, the approach employs a structured critic that generates span-level linguistic feedback and multidimensional reward signals for policy optimization. Experiments demonstrate that this method significantly outperforms larger state-of-the-art models across multiple scientific evaluation metrics and generalizes effectively to diverse external feedback sources without requiring fine-tuning.
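The summary describes a critic that emits span-level, checklist-grounded critiques. The paper's actual schema is not given here; the following is a minimal illustrative sketch of what such a structured critique record might look like (all field names are assumptions, not the authors' format):

```python
from dataclasses import dataclass

@dataclass
class ChecklistCritique:
    """Hypothetical span-level critique emitted by a structured judge."""
    span: tuple[int, int]  # character offsets into the proposal text
    dimension: str         # e.g. "grounding", "feasibility", "rigor"
    passed: bool           # whether this span satisfies the checklist item
    comment: str           # actionable language feedback for refinement

# Example: the judge flags an infeasible claim in one span of the proposal.
critique = ChecklistCritique(
    span=(120, 188),
    dimension="feasibility",
    passed=False,
    comment="Claimed dataset size exceeds what the cited corpus provides.",
)
print(critique.dimension, critique.passed)  # feasibility False
```

In such a design, per-dimension pass/fail flags could be aggregated into reward components, while the comments serve as the language feedback the policy learns to act on.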

📝 Abstract
Scientific idea generation is a cornerstone of autonomous knowledge discovery, yet the iterative evolution required to transform initial concepts into high-quality research proposals remains a formidable challenge for Large Language Models (LLMs). Existing Reinforcement Learning (RL) paradigms often rely on rubric-based scalar rewards that provide global quality scores but lack actionable granularity. Conversely, language-based refinement methods are typically confined to inference-time prompting, targeting models that are not explicitly optimized to internalize such critiques. To bridge this gap, we propose EvoIdeator, a framework that facilitates the evolution of scientific ideas by aligning the RL training objective with checklist-grounded feedback. EvoIdeator leverages a structured judge model to generate two synergistic signals: (1) lexicographic rewards for multi-dimensional optimization, and (2) fine-grained language feedback that offers span-level critiques regarding grounding, feasibility, and methodological rigor. By integrating these signals into the RL loop, we condition the policy to systematically utilize precise feedback during both optimization and inference. Extensive experiments demonstrate that EvoIdeator, built on Qwen3-4B, significantly outperforms much larger frontier models across key scientific metrics. Crucially, the learned policy exhibits strong generalization to diverse external feedback sources without further fine-tuning, offering a scalable and rigorous path toward self-refining autonomous ideation.
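The abstract mentions lexicographic rewards for multi-dimensional optimization. The paper's reward construction is not detailed here, but lexicographic ordering itself is a standard notion: reward vectors are compared dimension by dimension in priority order, so a later dimension only breaks ties on earlier ones. A minimal sketch (the dimension ordering and tolerance are illustrative assumptions):

```python
def lexicographic_better(r_a, r_b, tol=1e-6):
    """Return True if reward vector r_a lexicographically dominates r_b.

    Dimensions are compared in priority order; a later dimension is
    consulted only when all earlier dimensions are tied within `tol`.
    """
    for a, b in zip(r_a, r_b):
        if a > b + tol:
            return True
        if a < b - tol:
            return False
    return False  # tied within tolerance on every dimension

# Example with an assumed priority order (grounding, feasibility, rigor):
# proposal A ties on grounding and wins on feasibility, so its lower
# rigor score never gets consulted.
print(lexicographic_better([0.9, 0.7, 0.4], [0.9, 0.6, 0.8]))  # True
```

One consequence of this ordering is that a policy cannot trade away a higher-priority dimension (e.g. grounding) for gains on a lower-priority one, which plain weighted-sum rewards permit.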
Problem

Research questions and friction points this paper is trying to address.

scientific idea generation
reinforcement learning
fine-grained feedback
iterative evolution
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

checklist-grounded feedback
lexicographic rewards
fine-grained language feedback
reinforcement learning
autonomous ideation