Can Test-time Computation Mitigate Memorization Bias in Neural Symbolic Regression?

📅 2025-05-28
🤖 AI Summary
Transformer-based neural symbolic regression (NSR) exhibits a pronounced memorization bias: models over-reproduce expressions seen in training data while struggling to compose and numerically verify novel expressions, so generalization degrades sharply as the number of input variables grows. Method: This work introduces, for the first time, a quantitative characterization of this bias and examines a "test-time information injection" strategy that supplies the model with numerical feedback and symbolic expression constraints at inference. The approach combines pretrained Transformers, synthetic-data evaluation, a compositionality-based theoretical analysis, and tailored test-time prompting. Contribution/Results: Experiments demonstrate a 47% reduction in memorization bias; however, overall performance gains remain limited. This reveals a fundamental decoupling between bias mitigation and generalization improvement in NSR, challenging the assumption that reducing memorization directly enhances generalization. The findings establish a new conceptual boundary for understanding generalization mechanisms in neural-symbolic models.

📝 Abstract
Symbolic regression aims to discover mathematical equations that fit given numerical data. It has been applied in various fields of scientific research, such as producing human-readable expressions that explain physical phenomena. Recently, neural symbolic regression (NSR) methods that involve Transformers pre-trained on large-scale synthetic datasets have gained attention. While these methods offer advantages such as short inference time, they suffer from low performance, particularly when the number of input variables is large. In this study, we hypothesized that this limitation stems from the memorization bias of Transformers in symbolic regression. We conducted a quantitative evaluation of this bias in Transformers using a synthetic dataset and found that Transformers rarely generate expressions not present in the training data. Additional theoretical analysis reveals that this bias arises from the Transformer's inability to construct expressions compositionally while verifying their numerical validity. We finally examined whether tailored test-time strategies can reduce memorization bias and improve performance. We empirically demonstrate that providing additional information to the model at test time can significantly mitigate memorization bias. On the other hand, we also find that reducing memorization bias does not necessarily correlate with improved performance. These findings contribute to a deeper understanding of the limitations of NSR approaches and offer a foundation for designing more robust, generalizable symbolic regression methods. Code is available at https://github.com/Shun-0922/Mem-Bias-NSR .
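To make the task concrete, here is a minimal, purely illustrative sketch of what symbolic regression does (not the paper's neural method): search over candidate expressions and select the one that best fits the observed data. The candidate library and data below are hypothetical examples.

```python
import math

# Hypothetical candidate library: expression string -> callable.
# Real symbolic regression searches a vastly larger, compositional space;
# brute-force enumeration here only illustrates the core objective.
CANDIDATES = {
    "x": lambda x: x,
    "x^2": lambda x: x * x,
    "x^2 + x": lambda x: x * x + x,
    "sin(x)": lambda x: math.sin(x),
}

def mse(f, xs, ys):
    """Mean squared error of expression f on the data."""
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def symbolic_regression(xs, ys):
    """Return the candidate expression that best fits (xs, ys)."""
    return min(CANDIDATES, key=lambda name: mse(CANDIDATES[name], xs, ys))

# Data generated by a hidden ground-truth equation y = x^2 + x.
xs = [0.5 * i for i in range(10)]
ys = [x * x + x for x in xs]
print(symbolic_regression(xs, ys))  # -> x^2 + x
```

Unlike ordinary regression, the output is a human-readable equation rather than fitted weights, which is what makes the approach attractive for scientific discovery.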
Problem

Research questions and friction points this paper is trying to address.

Mitigate memorization bias in Neural Symbolic Regression
Evaluate Transformers' bias against generating novel expressions
Test-time strategies to reduce bias and their effect on performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-time computation reduces memorization bias
Theoretical analysis of Transformers' inability to compose and numerically verify expressions
Additional test-time information mitigates bias, but does not necessarily improve performance
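The abstract notes that this bias stems from the Transformer's inability to verify numerical validity while composing expressions. A minimal, hypothetical sketch of test-time numerical verification (not the paper's actual injection method): candidate expressions sampled from a model are kept only if their predictions match the observed data within a tolerance. All names and data below are illustrative.

```python
import math

def numerically_valid(expr_fn, xs, ys, tol=1e-6):
    """Check whether a candidate expression reproduces the data within tol."""
    try:
        return all(abs(expr_fn(x) - y) <= tol for x, y in zip(xs, ys))
    except (ValueError, OverflowError, ZeroDivisionError):
        return False  # e.g. log of a negative number: reject the candidate

def filter_candidates(candidates, xs, ys):
    """Keep only sampled expressions that pass numerical verification."""
    return [name for name, fn in candidates if numerically_valid(fn, xs, ys)]

# Hypothetical candidates a model might sample for data from y = x + 1.
xs = [1.0, 2.0, 3.0]
ys = [x + 1.0 for x in xs]
sampled = [
    ("x + 1", lambda x: x + 1.0),
    ("2*x", lambda x: 2.0 * x),          # fits at x=1 only: rejected
    ("log(x - 5)", lambda x: math.log(x - 5.0)),  # undefined: rejected
]
print(filter_candidates(sampled, xs, ys))  # -> ['x + 1']
```

A filter like this supplies exactly the numerical feedback the model lacks internally, which is why test-time computation can reduce memorization bias even when overall accuracy gains stay limited.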
Shun Sato
The University of Tokyo
Issei Sato
The University of Tokyo
Machine learning