🤖 AI Summary
Scientific idea generation is inherently multi-objective and open-ended, requiring simultaneous optimization of novelty and empirical rigor—yet large language models (LLMs) exhibit inconsistent creative performance and poorly understood mechanisms in this domain. This paper presents a systematic review of LLM-driven scientific ideation methods, introducing the first integrative taxonomy grounded in Boden’s creativity framework (combinatorial, exploratory, transformational) and Rhodes’ 4Ps model (person, process, press, product). We categorize prevailing approaches into five core technical paradigms: external knowledge augmentation, prompt engineering, inference-time scaling, multi-agent collaboration, and parameter-level adaptation. Our analysis maps each paradigm onto distinct creativity levels and sources, clarifying current capability boundaries. We further propose a co-optimization direction that jointly enhances ideational novelty and scientific validity—offering theoretical foundations and methodological guidance for developing trustworthy scientific AI.
📝 Abstract
Scientific idea generation lies at the heart of scientific discovery and has driven human progress, whether by solving unsolved problems or by proposing novel hypotheses to explain unknown phenomena. Unlike standard scientific reasoning or general creative generation, idea generation in science is a multi-objective, open-ended task in which the novelty of a contribution is as essential as its empirical soundness. Large language models (LLMs) have recently emerged as promising generators of scientific ideas, capable of producing coherent and factual outputs with surprising intuition and acceptable reasoning, yet their creative capacity remains inconsistent and poorly understood. This survey provides a structured synthesis of methods for LLM-driven scientific ideation, examining how different approaches balance creativity with scientific soundness. We categorize existing methods into five complementary families: external knowledge augmentation, prompt-based distributional steering, inference-time scaling, multi-agent collaboration, and parameter-level adaptation. To interpret their contributions, we employ two complementary frameworks: Boden's taxonomy of combinatorial, exploratory, and transformational creativity, which characterizes the level of ideas each family is expected to generate, and Rhodes' 4Ps framework (Person, Process, Press, and Product), which locates the aspect or source of creativity that each method emphasizes. By aligning methodological advances with creativity frameworks, this survey clarifies the state of the field and outlines key directions toward reliable, systematic, and transformative applications of LLMs in scientific discovery.