LLM-Based Scientific Equation Discovery via Physics-Informed Token-Regularized Policy Optimization

๐Ÿ“… 2026-02-11
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing LLM-based approaches to symbolic regression struggle to balance physical consistency with expression simplicity, and they lack dynamic optimization mechanisms driven by search feedback. This work proposes the PiT-PO framework, which, for the first time, integrates physical constraints and token-level structural regularization into reinforcement learning–based policy optimization, enabling dynamic, hierarchical guidance of the equation generation process. The approach allows large language models to adaptively evolve into efficient scientific equation generators, achieving state-of-the-art performance on standard symbolic regression benchmarks. Notably, it discovers a novel turbulence model and outperforms proprietary large models using only a small-scale architecture, enhancing both the accuracy and interpretability of scientific discovery.

๐Ÿ“ Abstract
Symbolic regression aims to distill mathematical equations from observational data. Recent approaches have successfully leveraged Large Language Models (LLMs) to generate equation hypotheses, capitalizing on their vast pre-trained scientific priors. However, existing frameworks predominantly treat the LLM as a static generator, relying on prompt-level guidance to steer exploration. This paradigm fails to update the model's internal representations based on search feedback, often yielding physically inconsistent or mathematically redundant expressions. In this work, we propose PiT-PO (Physics-informed Token-regularized Policy Optimization), a unified framework that evolves the LLM into an adaptive generator via reinforcement learning. Central to PiT-PO is a dual-constraint mechanism that rigorously enforces hierarchical physical validity while simultaneously applying fine-grained, token-level penalties to suppress redundant structures. Consequently, PiT-PO aligns the LLM to produce equations that are both scientifically consistent and structurally parsimonious. Empirically, PiT-PO achieves state-of-the-art performance on standard benchmarks and successfully discovers novel turbulence models for challenging fluid dynamics problems. We also demonstrate that PiT-PO empowers small-scale models to outperform closed-source giants, democratizing access to high-performance scientific discovery.
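The abstract's dual-constraint idea — gate the reward on hierarchical physical validity, then subtract fine-grained token-level penalties for redundant structures — can be sketched as a reward-shaping function. The sketch below is purely illustrative: the function names, redundancy patterns, weights, and the exact reward form are my assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of a dual-constraint reward in the spirit of PiT-PO.
# ASSUMPTIONS: the pattern list, the 0.1 penalty weight, and the 1/(1+MSE)
# accuracy term are hypothetical choices for demonstration only.

# Directly nested operator pairs treated as mathematically redundant,
# e.g. exp(log(x)) or log(exp(x)).
REDUNDANT_PATTERNS = [("exp", "log"), ("log", "exp"), ("sqrt", "sqrt")]

def token_redundancy_penalty(tokens, weight=0.1):
    """Token-level penalty: count adjacent redundant operator pairs."""
    hits = sum(1 for a, b in zip(tokens, tokens[1:])
               if (a, b) in REDUNDANT_PATTERNS)
    return weight * hits

def equation_reward(mse, physically_valid, tokens):
    """Scalar reward for one candidate equation.

    Hierarchical constraint: physical invalidity dominates everything,
    so an invalid equation gets a flat negative reward regardless of fit.
    Otherwise, reward = accuracy term minus the structural penalty.
    """
    if not physically_valid:
        return -1.0
    accuracy = 1.0 / (1.0 + mse)  # squashes MSE into (0, 1]
    return accuracy - token_redundancy_penalty(tokens)
```

In an actual RL setup, this scalar would serve as the return for policy-gradient updates on the equation-generating LLM; the hierarchical gating ensures physics violations are never traded off against data fit.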
Problem

Research questions and friction points this paper is trying to address.

symbolic regression
scientific equation discovery
Large Language Models
physical consistency
mathematical redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Symbolic Regression
Large Language Models
Reinforcement Learning
Physics-Informed Constraints
Token-Regularized Policy Optimization
Boxiao Wang
Institute of Automation, Chinese Academy of Sciences
Kai Li
University of Chinese Academy of Sciences & City University of Hong Kong
Computer Vision · Multimodal Language Model · Remote Sensing
Tianyi Liu
State Key Laboratory of Aerodynamics
Chen Li
State Key Laboratory of Aerodynamics
Junzhe Wang
School of Engineering, Westlake University, Hangzhou, Zhejiang, China
Circuit and System · Digital Chip Design · Brain-machine Interface
Yifan Zhang
Institute of Automation, Chinese Academy of Sciences
Jian Cheng
Institute of Automation, Chinese Academy of Sciences