In Context Learning and Reasoning for Symbolic Regression with Large Language Models

📅 2024-10-22
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Symbolic regression—discovering concise, accurate, and interpretable mathematical equations from data—remains challenging for large language models (LLMs) due to their limited grounding in scientific reasoning and symbolic manipulation. Method: We propose a closed-loop iterative reasoning framework that integrates scientific context modeling and chain-of-thought prompting within a scratchpad mechanism, enabling GPT-4 to collaboratively generate candidate expressions using data, historical formulas, and physical priors. Candidates undergo symbolic verification via SymPy and numerical optimization via SciPy, with feedback-driven iterative refinement. Contribution/Results: This work achieves the first end-to-end integration of natural-language scientific knowledge with verifiable symbolic optimization. Our method successfully recovers five canonical physics equations and generates plausible expressions on unseen datasets. Incorporating contextual grounding and scratchpad reasoning significantly improves accuracy, demonstrating that LLMs can serve as effective auxiliary reasoning engines for interpretable scientific modeling.

📝 Abstract
Large Language Models (LLMs) are transformer-based machine learning models that have shown remarkable performance in tasks for which they were not explicitly trained. Here, we explore the potential of LLMs to perform symbolic regression -- a machine-learning method for finding simple and accurate equations from datasets. We prompt GPT-4 to suggest expressions from data, which are then optimized and evaluated using external Python tools. These results are fed back to GPT-4, which proposes improved expressions while optimizing for complexity and loss. Using chain-of-thought prompting, we instruct GPT-4 to analyze the data, prior expressions, and the scientific context (expressed in natural language) for each problem before generating new expressions. We evaluated the workflow on the rediscovery of five well-known scientific equations from experimental data, and on an additional dataset without a known equation. GPT-4 successfully rediscovered all five equations and, in general, performed better when prompted to use a scratchpad and consider scientific context. We also demonstrate how strategic prompting improves the model's performance and how the natural language interface simplifies integrating theory with data. Although this approach does not outperform established SR programs where target equations are more complex, LLMs can nonetheless iterate toward improved solutions while following instructions and incorporating scientific context in natural language.
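The verify-and-optimize step described in the abstract can be sketched in a few lines: a candidate expression string (as an LLM might propose it) is parsed with SymPy, its free constants are fitted to the data with SciPy, and the resulting loss would then be fed back into the next prompt. This is an illustrative sketch, not the authors' actual pipeline; the function name `fit_candidate`, the constant-naming convention `c0, c1, ...`, and the sample data are assumptions.

```python
import numpy as np
import sympy as sp
from scipy.optimize import curve_fit

def fit_candidate(expr_str, x_data, y_data):
    """Parse a candidate expression in x with free constants c0, c1, ...,
    fit the constants to the data, and return (constants, mean squared error)."""
    expr = sp.sympify(expr_str)  # symbolic verification: raises on malformed input
    x = sp.Symbol("x")
    consts = sorted(expr.free_symbols - {x}, key=lambda s: s.name)
    f = sp.lambdify((x, *consts), expr, "numpy")  # compile to a numeric callable
    # Fit the free constants; p0 fixes the number of parameters for curve_fit.
    popt, _ = curve_fit(lambda xv, *c: f(xv, *c), x_data, y_data,
                        p0=np.ones(len(consts)))
    mse = float(np.mean((f(x_data, *popt) - y_data) ** 2))
    return popt, mse

# Example: a candidate "c0*x**2 + c1" evaluated against data from y = 2.5*x^2 + 1
x_data = np.linspace(0.0, 1.0, 50)
y_data = 2.5 * x_data**2 + 1.0
popt, mse = fit_candidate("c0*x**2 + c1", x_data, y_data)
```

In the paper's closed loop, `mse` (the loss) and the expression's complexity would be returned to GPT-4 as feedback for proposing a refined candidate.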
Problem

Research questions and friction points this paper is trying to address.

Exploring LLMs for symbolic regression tasks.
Using GPT-4 to rediscover scientific equations from data.
Improving model performance with strategic prompting and context.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs perform symbolic regression via GPT-4 prompting.
Chain-of-thought prompting enhances expression generation.
Natural language integrates theory with data effectively.
Samiha Sharlin
Department of Chemical, Biochemical, and Environmental Engineering, University of Maryland Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250
Tyler R. Josephson
Assistant Professor, Chemical, Biochemical, and Environmental Engineering
AI & Theory-Oriented Molecular Science