🤖 AI Summary
This work proposes a multi-agent AI scientist framework with persistent memory and self-evolution capabilities. It targets a key limitation of existing AI research systems, which rely on static, manually designed workflows and struggle to improve from their interaction histories, repeating failures or overlooking promising directions. The framework orchestrates collaborative, end-to-end scientific discovery through three specialized agents (Researcher, Engineer, and Evolution Manager), integrating large language models with persistent ideation and experimentation memory modules. By analyzing code search trajectories and distilling knowledge from prior runs, it dynamically refines research ideas and experimental designs while enabling effective knowledge reuse. Evaluated on scientific idea generation and code execution success, the system significantly outperforms seven state-of-the-art baselines, with both automatic and human assessments confirming its superior novelty, feasibility, relevance, and clarity.
📝 Abstract
The increasing adoption of Large Language Models (LLMs) has enabled AI scientists to perform complex end-to-end scientific discovery tasks requiring coordination of specialized roles, including idea generation and experimental execution. However, most state-of-the-art AI scientist systems rely on static, hand-designed pipelines and fail to adapt based on accumulated interaction histories. As a result, these systems overlook promising research directions, repeat failed experiments, and pursue infeasible ideas. To address this, we introduce EvoScientist, an evolving multi-agent AI scientist framework that continuously improves research strategies through persistent memory and self-evolution. EvoScientist comprises three specialized agents: a Researcher Agent (RA) for scientific idea generation, an Engineer Agent (EA) for experiment implementation and execution, and an Evolution Manager Agent (EMA) that distills insights from prior interactions into reusable knowledge. EvoScientist contains two persistent memory modules: (i) an ideation memory, which summarizes feasible research directions from top-ranked ideas while recording previously unsuccessful directions; and (ii) an experimentation memory, which captures effective data processing and model training strategies derived from code search trajectories and best-performing implementations. These modules enable the RA and EA to retrieve relevant prior strategies, improving idea quality and code execution success rates over time. Experiments show that EvoScientist outperforms seven open-source and commercial state-of-the-art systems in scientific idea generation, achieving higher novelty, feasibility, relevance, and clarity under both automatic and human evaluation. EvoScientist also substantially improves code execution success rates through multi-agent evolution, demonstrating the effectiveness of persistent memory for end-to-end scientific discovery.
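The memory-driven agent loop described in the abstract can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the `Memory` class, the three agent functions, and the keyword-based retrieval are all assumptions standing in for the LLM-backed RA, EA, and EMA. What it shows is the core claim, that distilling each cycle's outcome into persistent memory lets later cycles succeed where earlier ones failed.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Persistent store of distilled insights (feasible directions, failures, strategies)."""
    entries: list = field(default_factory=list)

    def add(self, insight: str) -> None:
        self.entries.append(insight)

    def retrieve(self, keyword: str) -> list:
        # Naive keyword match standing in for learned retrieval over memory.
        return [e for e in self.entries if keyword in e]

def researcher_agent(topic: str, ideation_memory: Memory) -> str:
    # RA stand-in: condition the new idea on prior ideation insights.
    prior = ideation_memory.retrieve(topic)
    return f"idea on {topic} (informed by {len(prior)} prior insights)"

def engineer_agent(idea: str, experiment_memory: Memory) -> bool:
    # EA stand-in: execution "succeeds" once a reusable strategy exists in memory.
    return len(experiment_memory.retrieve("strategy")) > 0

def evolution_manager(idea: str, success: bool,
                      ideation_memory: Memory, experiment_memory: Memory) -> None:
    # EMA stand-in: distill the cycle's outcome into both memories.
    if success:
        ideation_memory.add(f"feasible: {idea}")
        experiment_memory.add("strategy: reuse best-performing implementation")
    else:
        ideation_memory.add(f"failed: {idea}")
        experiment_memory.add("strategy: adjust data processing")

def run_cycles(topic: str, n_cycles: int = 3) -> list:
    ideation, experiment = Memory(), Memory()
    outcomes = []
    for _ in range(n_cycles):
        idea = researcher_agent(topic, ideation)
        success = engineer_agent(idea, experiment)
        evolution_manager(idea, success, ideation, experiment)
        outcomes.append(success)
    return outcomes
```

Running `run_cycles("graph learning")` yields a first failed cycle (empty experimentation memory), after which the distilled strategy makes subsequent cycles succeed, mirroring the improvement-over-time behavior the abstract attributes to the two memory modules.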