Breaking the Simplification Bottleneck in Amortized Neural Symbolic Regression

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency and limited scalability of existing amortized neural symbolic regression methods, which rely on general-purpose computer algebra systems like SymPy for expression simplification. To overcome this bottleneck, the authors propose SimpliPy, a lightweight, rule-driven expression simplification engine, and integrate it into the Flash-ANSR framework. SimpliPy achieves over two orders of magnitude speedup over SymPy while preserving simplification quality, substantially improving token efficiency and training throughput and enabling systematic test-set decontamination. On the FastSRB benchmark, the method significantly outperforms amortized approaches such as NeSymReS and E2E and matches the performance of direct optimization methods like PySR. Notably, as the inference budget increases, it recovers increasingly concise rather than more complex expressions.

📝 Abstract
Symbolic regression (SR) aims to discover interpretable analytical expressions that accurately describe observed data. Amortized SR promises to be much more efficient than the predominant genetic programming SR methods, but currently struggles to scale to realistic scientific complexity. We find that a key obstacle is the lack of a fast reduction of equivalent expressions to a concise normalized form. Amortized SR has addressed this by general-purpose Computer Algebra Systems (CAS) like SymPy, but the high computational cost severely limits training and inference speed. We propose SimpliPy, a rule-based simplification engine achieving a 100-fold speed-up over SymPy at comparable quality. This enables substantial improvements in amortized SR, including scalability to much larger training sets, more efficient use of the per-expression token budget, and systematic training set decontamination with respect to equivalent test expressions. We demonstrate these advantages in our Flash-ANSR framework, which achieves much better accuracy than amortized baselines (NeSymReS, E2E) on the FastSRB benchmark. Moreover, it performs on par with state-of-the-art direct optimization (PySR) while recovering more concise instead of more complex expressions with increasing inference budget.
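The abstract's core idea of reducing equivalent expressions to a concise normal form via rewrite rules can be sketched in a few lines. The rules and tuple representation below are hypothetical illustrations, not SimpliPy's actual rule set or data structures: expressions are nested tuples like `('+', 'x', 0)`, and identity, annihilator, and constant-folding rules are applied bottom-up until a fixed point is reached.

```python
# Hypothetical sketch of rule-based simplification to a normal form.
# Expressions: nested tuples (op, left, right); variables are strings,
# constants are numbers. Not the paper's actual implementation.

def rewrite(expr):
    """One bottom-up pass of the rewrite rules."""
    if not isinstance(expr, tuple):
        return expr                       # variable or constant: leave as-is
    op, a, b = expr
    a, b = rewrite(a), rewrite(b)         # normalize children first
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return a + b if op == '+' else a * b   # constant folding
    if op == '+':
        if a == 0: return b               # 0 + x -> x
        if b == 0: return a               # x + 0 -> x
    if op == '*':
        if a == 0 or b == 0: return 0     # 0 * x -> 0
        if a == 1: return b               # 1 * x -> x
        if b == 1: return a               # x * 1 -> x
    return (op, a, b)

def simplify(expr):
    """Apply passes until a fixed point (the normal form) is reached."""
    prev = None
    while expr != prev:
        prev, expr = expr, rewrite(expr)
    return expr
```

Because each pass is a cheap tree traversal over a fixed rule table, many equivalent training expressions collapse to the same normal form without invoking a general-purpose CAS, which is the kind of speedup the abstract attributes to SimpliPy.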
Problem

Research questions and friction points this paper is trying to address.

Symbolic Regression
Amortized Inference
Expression Simplification
Scalability
Normalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

SimpliPy
amortized symbolic regression
expression simplification
neural symbolic regression
Flash-ANSR