🤖 AI Summary
The capability of large language models (LLMs) to transform ambiguous scientific ideas into structured research plans, a critical yet underexplored step in scientific discovery, has not been systematically characterized.
Method: We introduce the Idea2Plan task and Idea2Plan Bench, the first benchmark for research planning, built from 200 ICML 2025 Spotlight and Oral papers released after major LLM training cutoffs. Each instance pairs a research idea with a grading rubric capturing the key components of a valid plan, enabling automated, rubric-based scoring of generated plans. We further propose Idea2Plan JudgeEval, a complementary benchmark that measures the reliability of LLM-based judges against expert annotations (a minimal sketch of this setup follows the summary).
Contribution/Results: Experiments show that GPT-5 and GPT-5-mini achieve the strongest performance, yet substantial headroom remains across key dimensions, including problem decomposition, methodology specification, and feasibility analysis. This work provides the first systematic characterization of LLM capabilities in this upstream phase of scientific discovery and delivers a standardized benchmark and methodological foundation for developing autonomous scientific agents.
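To make the benchmark's structure concrete, below is a minimal sketch of what an Idea2Plan-style instance and rubric-based scoring could look like. The schema, field names, and weighted scoring are illustrative assumptions, not the paper's actual data format.

```python
# Hypothetical sketch of an Idea2Plan-style benchmark instance and its
# rubric-based scoring; the schema and weighting scheme are assumptions.
from dataclasses import dataclass, field

@dataclass
class RubricItem:
    criterion: str   # e.g., "specifies baselines and evaluation metrics"
    weight: float    # relative importance of this criterion

@dataclass
class Idea2PlanInstance:
    paper_id: str    # source ICML 2025 Spotlight/Oral paper
    idea: str        # conceptual research idea given to the model
    rubric: list[RubricItem] = field(default_factory=list)

def score_plan(satisfied: dict[str, bool], rubric: list[RubricItem]) -> float:
    """Weighted fraction of rubric criteria the generated plan satisfies."""
    total = sum(item.weight for item in rubric)
    earned = sum(item.weight for item in rubric if satisfied.get(item.criterion))
    return earned / total if total else 0.0

# Example: a judge has marked which criteria a generated plan satisfies.
rubric = [RubricItem("states the core research question", 2.0),
          RubricItem("specifies baselines and evaluation metrics", 1.0)]
print(score_plan({"states the core research question": True}, rubric))  # ~0.67
```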
📝 Abstract
Large language models (LLMs) have demonstrated significant potential to accelerate scientific discovery as valuable tools for analyzing data, generating hypotheses, and supporting innovative approaches across scientific fields. In this work, we investigate how LLMs handle the transition from conceptual research ideas to well-structured research plans. Effective research planning not only supports scientists in advancing their research but is also a crucial capability for building autonomous research agents. Despite its importance, the field lacks a systematic understanding of LLMs' research planning capability. To rigorously measure this capability, we introduce the Idea2Plan task and Idea2Plan Bench, a benchmark built from 200 ICML 2025 Spotlight and Oral papers released after major LLM training cutoffs. Each benchmark instance includes a research idea and a grading rubric capturing the key components of valid plans. We further propose Idea2Plan JudgeEval, a complementary benchmark to assess the reliability of LLM-based judges against expert annotations. Experimental results show that GPT-5 and GPT-5-mini achieve the strongest performance on the benchmark, though substantial headroom remains for future improvement. Our study provides new insights into LLMs' capability for research planning and lays the groundwork for future progress.
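As a rough illustration of how judge reliability could be quantified in an Idea2Plan JudgeEval-style setup, the sketch below compares an LLM judge's per-criterion decisions and overall scores to expert annotations. The paper's actual metrics are not specified here; per-criterion agreement rate and Pearson correlation are assumptions chosen for illustration.

```python
# Illustrative judge-reliability check: compare an LLM judge to expert
# annotations. Metric choices (agreement rate, Pearson r) are assumptions.
from statistics import correlation  # Pearson correlation, Python 3.10+

def criterion_agreement(judge: list[bool], expert: list[bool]) -> float:
    """Fraction of rubric criteria on which the judge matches the expert."""
    if len(judge) != len(expert):
        raise ValueError("judge and expert must cover the same criteria")
    return sum(j == e for j, e in zip(judge, expert)) / len(judge)

def score_correlation(judge_scores: list[float], expert_scores: list[float]) -> float:
    """Pearson correlation of judge vs. expert plan scores across instances."""
    return correlation(judge_scores, expert_scores)

# Example: agreement on three rubric criteria for one plan.
print(criterion_agreement([True, False, True], [True, True, True]))  # ~0.67
```

High agreement with expert annotations is what would justify relying on an LLM judge for automated scoring at benchmark scale.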