In-Context Symbolic Regression for Robustness-Improved Kolmogorov-Arnold Networks

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor robustness of Kolmogorov–Arnold Networks (KANs) in symbolic regression, which stems from sensitivity to initialisation and from neglecting how local operator replacements interact with global structural consistency. To overcome these limitations, the authors propose an in-context symbolic regression framework that guides operator substitution by end-to-end loss, balancing local accuracy with global coherence. The approach introduces contextual mechanisms into KAN-based symbolic extraction for the first time, via two methods: Greedy in-context Symbolic Regression (GSR), which selects edge replacements greedily according to end-to-end loss improvement after brief fine-tuning, and Gated Matching Pursuit (GMP), which trains a differentiable gated sparse operator layer and then discretises the gates in a post-processing step. Experiments show the method reduces median MSE by up to 99.8% on OFAT benchmarks and recovers analytical expressions that outperform existing baselines in both predictive error and qualitative consistency.
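The greedy, loss-guided substitution described above can be sketched in a toy form: each edge's learned function is replaced, one at a time, by whichever candidate operator most reduces the end-to-end loss, and the replacement is kept only if it helps. The two-edge composition, the three-operator library, and the target function below are illustrative assumptions (the paper's actual library, fine-tuning step, and network structure are not given on this page; fine-tuning is omitted entirely):

```python
import numpy as np

# Hypothetical candidate operator library (illustrative only).
CANDIDATES = {"sin": np.sin, "square": np.square, "identity": lambda t: t}

def end_to_end_loss(edge_fns, x, y):
    # Toy "network": a composition of two edges, scored end to end.
    pred = edge_fns[1](edge_fns[0](x))
    return float(np.mean((pred - y) ** 2))

def greedy_symbolic_pass(edge_fns, x, y):
    """One greedy pass: for each edge, try every candidate operator and
    keep a replacement only if it lowers the end-to-end loss."""
    chosen = [None] * len(edge_fns)
    for i in range(len(edge_fns)):
        best = end_to_end_loss(edge_fns, x, y)
        for name, op in CANDIDATES.items():
            trial = list(edge_fns)
            trial[i] = op
            loss = end_to_end_loss(trial, x, y)
            if loss < best:
                best, edge_fns, chosen[i] = loss, trial, name
    return edge_fns, chosen

# Ground truth y = sin(x)^2; both edges start as crude identity stand-ins.
x = np.linspace(0.0, 2.0, 50)
y = np.sin(x) ** 2
fns, chosen = greedy_symbolic_pass([lambda t: t, lambda t: t], x, y)
# chosen -> ['sin', 'square'], recovering y = square(sin(x)) exactly
```

Because each candidate is scored by the loss of the full composition rather than by a per-edge fit, a substitution that looks good locally but breaks downstream edges is rejected, which is the "global coherence" the summary refers to.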

📝 Abstract
Symbolic regression aims to replace black-box predictors with concise analytical expressions that can be inspected and validated in scientific machine learning. Kolmogorov-Arnold Networks (KANs) are well suited to this goal because each connection between adjacent units (an "edge") is parametrised by a learnable univariate function that can, in principle, be replaced by a symbolic operator. In practice, however, symbolic extraction is a bottleneck: the standard KAN-to-symbol approach fits operators to each learned edge function in isolation, making the discrete choice sensitive to initialisation and non-convex parameter fitting, and ignoring how local substitutions interact through the full network. We study in-context symbolic regression for operator extraction in KANs, and present two complementary instantiations. Greedy in-context Symbolic Regression (GSR) performs greedy, in-context selection by choosing edge replacements according to end-to-end loss improvement after brief fine-tuning. Gated Matching Pursuit (GMP) amortises this in-context selection by training a differentiable gated operator layer that places an operator library behind sparse gates on each edge; after convergence, gates are discretised (optionally followed by a short in-context greedy refinement pass). We quantify robustness via one-factor-at-a-time (OFAT) hyper-parameter sweeps and assess both predictive error and qualitative consistency of recovered formulas. Across several experiments, greedy in-context symbolic regression achieves up to 99.8% reduction in median OFAT test MSE.
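The gated operator layer behind GMP can be illustrated with a minimal NumPy sketch: each edge holds learnable gate logits over an operator library, the forward pass is a soft mixture of all operators, and after training the gate is snapped to its largest entry. The operator library and gate parametrisation below are illustrative assumptions, and the differentiable training loop is omitted:

```python
import numpy as np

# Hypothetical operator library; the paper's actual library is not listed here.
OPERATORS = {
    "sin": np.sin,
    "exp": np.exp,
    "square": np.square,
    "identity": lambda t: t,
}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class GatedEdge:
    """One KAN edge: a sparse gate over a library of candidate operators.

    In training, the gate logits would be learned jointly with the rest of
    the network; here we only sketch the forward pass and the final
    discretisation that keeps a single symbolic operator."""

    def __init__(self, n_ops, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.logits = rng.normal(size=n_ops)  # learnable gate logits

    def forward(self, x):
        # Soft mixture of all operators, weighted by the (softmaxed) gate.
        g = softmax(self.logits)
        outs = np.stack([op(x) for op in OPERATORS.values()])
        return g @ outs

    def discretise(self):
        # After convergence: keep only the operator with the largest gate.
        name = list(OPERATORS)[int(np.argmax(self.logits))]
        return name, OPERATORS[name]

edge = GatedEdge(len(OPERATORS))
x = np.linspace(0.0, 1.0, 5)
soft_out = edge.forward(x)      # mixture output used while training
name, op = edge.discretise()    # single symbolic operator kept at the end
```

The soft mixture is what makes operator selection differentiable and hence trainable end to end; the discretisation step (optionally followed by the short greedy refinement pass the abstract mentions) converts the trained gates into a purely symbolic expression.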
Problem

Research questions and friction points this paper is trying to address.

Symbolic Regression
Kolmogorov-Arnold Networks
Robustness
Operator Extraction
In-Context Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context symbolic regression
Kolmogorov-Arnold Networks
greedy symbolic regression
gated matching pursuit
symbolic extraction