🤖 AI Summary
Existing symbolic regression methods largely extract equations from data alone, neglecting the domain-specific prior knowledge that scientists typically rely on, and they use low-expressivity representations such as expression trees that constrain both the search space and the complexity of discoverable equations. To address this, LLM-SR treats equations as programs built from mathematical operators and combines the scientific priors and code generation capabilities of Large Language Models (LLMs) with evolutionary search over equation programs: the LLM iteratively proposes equation skeleton hypotheses drawn from its domain knowledge, whose parameters are then optimized against data. Evaluated on four benchmark problems spanning scientific domains such as physics and biology, designed to simulate the discovery process and prevent LLM recitation, LLM-SR discovers physically accurate equations that significantly outperform state-of-the-art symbolic regression baselines, particularly in out-of-domain test settings, while exploring the equation space more efficiently.
📝 Abstract
Mathematical equations have been unreasonably effective in describing complex natural phenomena across various scientific disciplines. However, discovering such insightful equations from data presents significant challenges due to the necessity of navigating extremely large combinatorial hypothesis spaces. Current methods of equation discovery, commonly known as symbolic regression techniques, largely focus on extracting equations from data alone, often neglecting the domain-specific prior knowledge that scientists typically depend on. They also employ limited representations such as expression trees, constraining the search space and expressiveness of equations. To bridge this gap, we introduce LLM-SR, a novel approach that leverages the extensive scientific knowledge and robust code generation capabilities of Large Language Models (LLMs) to discover scientific equations from data. Specifically, LLM-SR treats equations as programs with mathematical operators and combines LLMs' scientific priors with evolutionary search over equation programs. The LLM iteratively proposes new equation skeleton hypotheses, drawing from its domain knowledge, which are then optimized against data to estimate parameters. We evaluate LLM-SR on four benchmark problems across diverse scientific domains (e.g., physics, biology), which we carefully designed to simulate the discovery process and prevent LLM recitation. Our results demonstrate that LLM-SR discovers physically accurate equations that significantly outperform state-of-the-art symbolic regression baselines, particularly in out-of-domain test settings. We also show that LLM-SR's incorporation of scientific priors enables more efficient equation space exploration than the baselines. Code and data are available: https://github.com/deep-symbolic-mathematics/LLM-SR
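The core loop described in the abstract — an LLM proposes an equation *skeleton* (a program whose structure is fixed but whose numeric coefficients are placeholders), which is then fitted to data — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the specific skeleton `p[0]*x + p[1]*sin(x)` and the closed-form least-squares fit are hypothetical choices for a skeleton that happens to be linear in its parameters; LLM-SR itself generates skeletons with an LLM and uses general-purpose parameter optimization.

```python
import math

# Hypothetical equation "skeleton" as an executable program: the structural
# terms play the role of what an LLM would propose; the coefficients p are
# placeholders to be estimated from data.
def skeleton(x, p):
    return p[0] * x + p[1] * math.sin(x)

def fit_linear_skeleton(xs, ys):
    """Closed-form least squares for a skeleton linear in two parameters:
    y ~ p0*b0(x) + p1*b1(x) with basis b0(x)=x, b1(x)=sin(x)."""
    b0 = xs
    b1 = [math.sin(x) for x in xs]
    # Normal equations for the 2x2 system A p = c.
    a00 = sum(u * u for u in b0)
    a01 = sum(u * v for u, v in zip(b0, b1))
    a11 = sum(v * v for v in b1)
    c0 = sum(u * y for u, y in zip(b0, ys))
    c1 = sum(v * y for v, y in zip(b1, ys))
    det = a00 * a11 - a01 * a01
    p0 = (a11 * c0 - a01 * c1) / det
    p1 = (a00 * c1 - a01 * c0) / det
    return [p0, p1]

# Synthetic data from a "true" law y = 2x + 0.5*sin(x); the fitted
# parameters should recover (2.0, 0.5) up to numerical error.
xs = [0.1 * i for i in range(1, 50)]
ys = [2.0 * x + 0.5 * math.sin(x) for x in xs]
p = fit_linear_skeleton(xs, ys)
mse = sum((skeleton(x, p) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

In the full method, many such candidate skeletons are maintained and refined by evolutionary search, with each candidate scored by its post-fit error on the data.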