🤖 AI Summary
This work investigates the ability of GELU-activated feedforward neural networks to uniformly approximate multivariate polynomials, the exponential function, and the reciprocal function, together with their derivatives of arbitrary order. Addressing the lack of theoretical frameworks that control high-order derivatives simultaneously with the function itself, we establish, for the first time, explicit uniform error bounds that hold for both the target function and all of its derivatives up to a prescribed order. We propose a constructive approximation scheme for multiplication and extend it to division and the exponential function, ensuring that all derivatives of the resulting approximators remain globally bounded. Through layerwise derivative control and asymptotic analysis at infinity, we derive explicit upper bounds on network width and weight magnitudes and characterize the network's behavior at infinity. Our results show that, for these fundamental function classes, the uniform approximation error in derivatives decays exponentially with network width.
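The paper's construction is not reproduced here, but a standard way to realize multiplication with a single hidden GELU layer is a scaled second difference of the activation, using the curvature GELU''(0) = √(2/π). The minimal sketch below illustrates that flavor of gadget under this assumption; the scale parameter `h` and the three-neuron arrangement are illustrative choices, not the paper's exact network.

```python
import math

def gelu(x: float) -> float:
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

# GELU''(0) = 2 * phi(0) = sqrt(2 / pi): the curvature the gadget divides out.
GELU_CURVATURE = math.sqrt(2.0 / math.pi)

def gelu_product(x: float, y: float, h: float = 1e-2) -> float:
    """Approximate x * y with a three-neuron GELU layer via a second difference.

    Taylor expansion (GELU(0) = 0, GELU'''(0) = 0):
        gelu(h*(x+y)) - gelu(h*x) - gelu(h*y)
            = GELU''(0) * h**2 * x * y + O(h**4),
    so dividing by GELU''(0) * h**2 yields x * y + O(h**2).
    """
    num = gelu(h * (x + y)) - gelu(h * x) - gelu(h * y)
    return num / (GELU_CURVATURE * h * h)

if __name__ == "__main__":
    # The error shrinks as the scale h decreases, roughly like h**2.
    for h in (1e-1, 1e-2, 1e-3):
        err = abs(gelu_product(1.3, -0.7, h) - (1.3 * -0.7))
        print(f"h = {h:.0e}  |error| = {err:.2e}")
```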
📝 Abstract
We derive an approximation error bound that holds simultaneously for a function and all of its derivatives up to any prescribed order. The bounds apply to elementary functions, including multivariate polynomials, the exponential function, and the reciprocal function, and are obtained using feedforward neural networks with the Gaussian Error Linear Unit (GELU) activation. In addition, we report the network size, weight magnitudes, and behavior at infinity. Our analysis begins with a constructive approximation of multiplication, where we prove that the error bounds hold simultaneously over domains of increasing size for a single fixed approximator. Leveraging this result, we obtain approximation guarantees for division and the exponential function, ensuring that all higher-order derivatives of the resulting approximators remain globally bounded.
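The abstract does not spell out how the multiplication result yields division. One classical route, sketched below as a plausible illustration rather than the paper's actual construction, is the Newton iteration z ← z(2 − xz), which converges quadratically to 1/x and uses only products and affine maps; composing it with the GELU product gadget gives a fixed-depth reciprocal network. The initialization `z0` and step count are illustrative assumptions.

```python
import math

def gelu(x: float) -> float:
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_product(x: float, y: float, h: float = 1e-3) -> float:
    # Same second-difference gadget as in the earlier sketch: x*y + O(h**2).
    return (gelu(h * (x + y)) - gelu(h * x) - gelu(h * y)) / (
        math.sqrt(2.0 / math.pi) * h * h
    )

def gelu_reciprocal(x: float, z0: float = 0.1, steps: int = 8) -> float:
    """Approximate 1/x via Newton's iteration z <- z * (2 - x*z).

    Each step uses only the GELU product gadget plus affine operations,
    so the whole computation is itself a GELU network of fixed depth.
    Convergence requires 0 < z0 < 2/x; z0 = 0.1 suits the demo below.
    """
    z = z0
    for _ in range(steps):
        z = gelu_product(z, 2.0 - gelu_product(x, z))
    return z

if __name__ == "__main__":
    print(gelu_reciprocal(4.0), 1.0 / 4.0)  # both close to 0.25
```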