🤖 AI Summary
This work addresses the rational approximation of smooth functions under the $\mathcal{C}^1$ norm, i.e., simultaneous uniform approximation of both the function and its first-order derivatives. We propose a unified approximation framework integrating rational functions and rational neural networks. For the first time, we derive an explicit upper bound on the $\mathcal{C}^1$-approximation error, quantitatively characterizing the convergence rate in terms of network width, network depth, and rational function degree. We further extend the theoretical analysis to two classes of interpretable neural architectures, EQL$^\div$ and ParFam, thereby providing rigorous theoretical foundations for symbolic regression in scientific discovery. Empirical results demonstrate that our method achieves high-fidelity reconstruction of both target functions and their derivatives in physics-informed modeling, significantly outperforming conventional polynomial approximations and ReLU-based neural networks.
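For reference, one standard convention for the $\mathcal{C}^1$-norm on a compact domain $\Omega \subset \mathbb{R}^d$ (equivalent variants, e.g., taking a maximum instead of a sum, are also common) is

$$\|f\|_{\mathcal{C}^1(\Omega)} = \sup_{x \in \Omega} |f(x)| + \sup_{x \in \Omega} \max_{1 \le i \le d} \left|\partial_{x_i} f(x)\right|,$$

so a bound $\|f - r\|_{\mathcal{C}^1(\Omega)} \le \varepsilon$ controls the function values and all first-order partial derivatives simultaneously.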
📝 Abstract
We show that suitably regular functions can be approximated in the $\mathcal{C}^1$-norm both with rational functions and with rational neural networks, including approximation rates with respect to the width and depth of the network and the degree of the rational functions. As a consequence of our results, we further obtain $\mathcal{C}^1$-approximation results for rational neural networks with the $\text{EQL}^\div$ and ParFam architectures, both of which are particularly important in the context of symbolic regression for physical law learning.
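To make the approximated quantity concrete, here is a minimal NumPy sketch (an illustration, not the paper's construction): it fits a type-$(5,5)$ rational function $r = p/q$ to a smooth one-dimensional target by linearized least squares and reports the empirical sup errors of both the values and the first derivatives. The target function, degrees, grid, and fitting procedure are all illustrative assumptions.

```python
import numpy as np

# Illustrative target function and its derivative.
f  = lambda x: np.exp(x) * np.sin(3 * x)
df = lambda x: np.exp(x) * (np.sin(3 * x) + 3 * np.cos(3 * x))

xs = np.linspace(-1.0, 1.0, 2001)

# Fit r = p/q of type (m, n) by linearized least squares:
# minimize || p(x) - f(x) * q(x) ||_2 with the constant coefficient of q fixed to 1.
m, n = 5, 5
V = np.vander(xs, m + 1, increasing=True)          # monomial basis for p
W = np.vander(xs, n + 1, increasing=True)          # monomial basis for q
A = np.hstack([V, -(f(xs)[:, None]) * W[:, 1:]])   # unknowns: p coeffs, then q coeffs (q_0 = 1)
coeffs, *_ = np.linalg.lstsq(A, f(xs), rcond=None)
p = np.polynomial.Polynomial(coeffs[: m + 1])
q = np.polynomial.Polynomial(np.concatenate([[1.0], coeffs[m + 1 :]]))

r  = lambda x: p(x) / q(x)
dr = lambda x: (p.deriv()(x) * q(x) - p(x) * q.deriv()(x)) / q(x) ** 2  # quotient rule

# Empirical C^1 error on the grid: sup error of values plus sup error of derivatives.
err_0 = np.max(np.abs(f(xs) - r(xs)))
err_1 = np.max(np.abs(df(xs) - dr(xs)))
print(f"sup |f - r|   = {err_0:.2e}")
print(f"sup |f' - r'| = {err_1:.2e}")
print(f"C^1 error     = {err_0 + err_1:.2e}")
```

A rational neural network, roughly speaking, replaces the single ratio $p/q$ by a composition of affine layers with trainable rational activations (ratios of low-degree polynomials), so this single-ratio fit can be viewed as the depth-one special case of the setting treated in the paper.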