🤖 AI Summary
Existing large language models (LLMs) for scientific equation discovery serve primarily as passive equation proposers inside genetic programming frameworks, without autonomous scientific reasoning. Method: The authors propose the first end-to-end trainable AI-scientist framework, which elevates the LLM to an active research agent with long-horizon planning, code execution, and closed-loop, feedback-driven optimization. The approach exposes a code interpreter as a tool-calling interface for data parsing, symbolic equation generation, and evaluation, and combines genetic-programming priors with reinforcement learning for policy optimization. Results: Across four scientific domains (physics, chemistry, biology, and engineering), the method improves equation discovery accuracy by an absolute 6–35% over baselines, while also improving noise robustness, cross-domain generalization, and symbolic fidelity. This moves LLMs from prompt-driven to goal-driven scientific discovery.
📝 Abstract
Recently, Large Language Models (LLMs) have been applied to scientific equation discovery, leveraging their embedded scientific knowledge for hypothesis generation. However, current methods typically confine LLMs to the role of an equation proposer within search algorithms like genetic programming. In this paper, we present SR-Scientist, a framework that elevates the LLM from a simple equation proposer to an autonomous AI scientist that writes code to analyze data, implements the equation as code, submits it for evaluation, and optimizes the equation based on experimental feedback. Specifically, we wrap the code interpreter into a set of tools for data analysis and equation evaluation. The agent is instructed to optimize the equation by utilizing these tools over a long horizon with minimal human-defined pipelines. Empirical results show that SR-Scientist outperforms baseline methods by an absolute margin of 6% to 35% on datasets covering four science disciplines. Additionally, we demonstrate our method's robustness to noise, the generalization of the discovered equations to out-of-domain data, and their symbolic accuracy. Furthermore, we develop an end-to-end reinforcement learning framework to enhance the agent's capabilities.
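The propose-evaluate-optimize loop described in the abstract can be sketched as follows. This is a minimal illustration only: the tool name `evaluate_equation`, the fixed candidate pool standing in for the LLM proposer, and the MSE fitness metric are assumptions for the sketch, not SR-Scientist's actual interface.

```python
# Minimal sketch of a closed-loop equation-discovery agent:
# propose a candidate equation, evaluate it with a tool, keep the best.
import random

def evaluate_equation(candidate, data):
    """Tool: score a candidate equation (a Python callable) by MSE on data."""
    try:
        errors = [(candidate(x) - y) ** 2 for x, y in data]
        return sum(errors) / len(errors)
    except (ValueError, OverflowError, ZeroDivisionError):
        return float("inf")  # reject equations that fail at runtime

def propose_equation(history):
    """Stand-in for the LLM proposer: suggest a symbolic form not yet tried."""
    candidates = [
        ("y = x",       lambda x: x),
        ("y = x**2",    lambda x: x ** 2),
        ("y = 2*x + 1", lambda x: 2 * x + 1),
    ]
    tried = {name for name, _ in history}
    untried = [c for c in candidates if c[0] not in tried]
    return untried[0] if untried else random.choice(candidates)

def discovery_loop(data, horizon=3):
    """Closed loop over a fixed horizon: propose, evaluate, track the best."""
    history, best = [], (None, float("inf"))
    for _ in range(horizon):
        name, fn = propose_equation(history)
        score = evaluate_equation(fn, data)
        history.append((name, fn))
        if score < best[1]:
            best = (name, score)
    return best

data = [(x, x ** 2) for x in range(-5, 6)]  # ground truth: y = x**2
print(discovery_loop(data))  # → ('y = x**2', 0.0)
```

In the actual framework the proposer is the LLM itself, which also writes data-analysis code and receives the evaluation score as experimental feedback for the next iteration; the sketch only captures the control flow of that loop.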