🤖 AI Summary
This study investigates whether large language models (LLMs) can solve cutting-edge AI research problems using only their pretrained parametric knowledge, without fine-tuning, retrieval augmentation, or external tools, in order to disentangle reasoning from memorization. We construct the first internal-knowledge-only benchmark for autonomous scientific problem solving, grounded in the problem statements of 1,214 high-quality ICLR 2025 papers. The framework, AInstein, combines domain-specialized solver agents, iterative critique loops, and an LLM-as-a-judge paradigm with structured scoring and targeted human verification, evaluating solutions along three dimensions: success, rediscovery, and novelty. Results show that while LLMs can rediscover known solutions and occasionally generate novel insights, their problem-solving performance is fragile and highly sensitive to problem phrasing. This work provides the first systematic empirical evidence of both the potential and the fundamental limitations of LLMs as autonomous scientific problem solvers.
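To make the solver side concrete, below is a minimal sketch of the propose/critique/revise loop described above. It is an illustration only: the `llm` callable, the prompt wording, the `ACCEPT` convention, and the `max_rounds` default are hypothetical assumptions, not details taken from the paper.

```python
from typing import Callable

def solve(problem: str, llm: Callable[[str], str], max_rounds: int = 3) -> str:
    """Propose a solution, then refine it through critique rounds,
    mirroring the proposal/review/revision cycle described above."""
    # Initial proposal from parametric knowledge only (no retrieval, no tools).
    solution = llm(
        f"Propose a technical solution to this research problem, "
        f"using only your own knowledge:\n{problem}"
    )
    for _ in range(max_rounds):
        critique = llm(
            f"Critique the solution below. Reply 'ACCEPT' if it is sound.\n\n"
            f"PROBLEM:\n{problem}\n\nSOLUTION:\n{solution}"
        )
        if critique.strip().upper().startswith("ACCEPT"):
            break  # the critic is satisfied; stop refining
        # Revision step: fold the critique back into the solution.
        solution = llm(
            f"Revise the solution to address this critique:\n{critique}\n\n"
            f"PROBLEM:\n{problem}\n\nCURRENT SOLUTION:\n{solution}"
        )
    return solution
```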
📝 Abstract
Large language models (LLMs) demonstrate impressive capabilities across a wide range of tasks, yet it remains unclear whether such success reflects genuine reasoning or sophisticated recall. We introduce AInstein, a framework for testing whether LLMs can generate valid solutions to AI research problems using only their pretrained parametric knowledge, without domain-specific fine-tuning, retrieval augmentation, or other external aids. Our approach extracts distilled problem statements from high-quality ICLR 2025 submissions, then tasks specialized solver agents with proposing and refining technical solutions through iterative critique loops, mimicking the cycles of proposal, review, and revision central to scientific inquiry. We evaluate AInstein on 1,214 ICLR papers stratified by acceptance tier (Oral, Spotlight, Poster), using an LLM-as-a-judge paradigm guided by a structured rubric, complemented by targeted manual checks. Performance is assessed with three metrics: Success Rate (does the solution address the problem?), Rediscovery (does it align with human-proposed methods?), and Novelty (does it yield valid, original approaches?). Our results reveal that while LLMs can rediscover feasible solutions and occasionally propose creative alternatives, their problem-solving ability remains fragile and highly sensitive to framing. These findings provide the first large-scale evidence on the extent to which LLMs can act as autonomous scientific problem solvers, highlighting both their latent potential and their current limitations.
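As a companion sketch for the evaluation side, here is one plausible shape for the LLM-as-a-judge step that turns a (problem, solution) pair into the three reported metrics. The rubric text, the `Verdict` container, and the yes/no parsing are hypothetical; the paper's actual structured rubric is not reproduced here.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical rubric mirroring the three reported metrics; the paper's
# actual structured rubric is more detailed than this.
RUBRIC = (
    "Given a research PROBLEM and a proposed SOLUTION, answer yes/no:\n"
    "SUCCESS: does the solution concretely address the problem?\n"
    "REDISCOVERY: does it align with the human-proposed method?\n"
    "NOVELTY: is it a valid approach distinct from the original?\n"
    "Reply with three lines, e.g. 'SUCCESS: yes'."
)

@dataclass
class Verdict:
    success: bool
    rediscovery: bool
    novelty: bool

def judge(problem: str, solution: str, llm: Callable[[str], str]) -> Verdict:
    """Score one (problem, solution) pair with an LLM judge."""
    reply = llm(f"{RUBRIC}\n\nPROBLEM:\n{problem}\n\nSOLUTION:\n{solution}")
    flags = {}
    for line in reply.splitlines():
        key, _, value = line.partition(":")
        flags[key.strip().upper()] = value.strip().lower().startswith("yes")
    return Verdict(
        success=flags.get("SUCCESS", False),
        rediscovery=flags.get("REDISCOVERY", False),
        novelty=flags.get("NOVELTY", False),
    )
```

In the study itself, such per-paper verdicts are aggregated into the Success Rate, Rediscovery, and Novelty scores, stratified by acceptance tier (Oral, Spotlight, Poster) and spot-checked manually.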