AI Summary
This work addresses the approximation of high-dimensional symmetric Korobov functions by deep neural networks (DNNs), with the aim of mitigating the curse of dimensionality. We propose an explicitly constructed class of permutation-invariant deep neural networks whose architecture intrinsically respects the permutation symmetry of the target functions. Under the Korobov space assumption, we establish rigorous upper bounds on both the approximation and generalization errors, and both bounds scale polynomially, rather than exponentially, in the input dimension. To our knowledge, this is the first result in which the leading constant of the convergence rate depends only polynomially on the dimension, thereby avoiding the exponential blow-up characteristic of the curse of dimensionality. The key innovation lies in embedding the functional symmetry prior directly into the network architecture and leveraging this inductive bias to derive tight, dimension-explicit error bounds. Our framework provides the first theoretically sound and constructively implementable DNN approach for learning high-dimensional symmetric functions.
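As a concrete illustration of the symmetry prior described above, the sketch below builds a DeepSets-style network whose output is invariant under any permutation of the input coordinates. This is a minimal generic example in PyTorch, not the paper's explicit construction; the class name `SymmetricNet` and all layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SymmetricNet(nn.Module):
    """Minimal permutation-invariant network (DeepSets-style sketch).

    A shared map ``phi`` embeds each coordinate, the embeddings are
    sum-pooled (a permutation-invariant operation), and ``rho`` maps the
    pooled feature to a scalar. Illustrative only; the paper constructs
    its symmetric networks explicitly rather than via this recipe.
    """

    def __init__(self, hidden: int = 64):
        super().__init__()
        # phi is applied identically to every coordinate (shared weights)
        self.phi = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.rho = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, d); each coordinate is treated as a set element
        z = self.phi(x.unsqueeze(-1))   # (batch, d, hidden)
        pooled = z.sum(dim=1)           # sum over coordinates: order-independent
        return self.rho(pooled).squeeze(-1)

# Sanity check: shuffling the input coordinates leaves the output unchanged.
net = SymmetricNet()
x = torch.randn(8, 10)
perm = torch.randperm(10)
assert torch.allclose(net(x), net(x[:, perm]), atol=1e-5)
```

Sum pooling is what enforces the invariance here; any symmetric pooling (mean, max) would serve the same structural purpose.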
Abstract
Deep neural networks have been widely used as universal approximators for functions with inherent physical structure, including permutation symmetry. In this paper, we construct symmetric deep neural networks to approximate symmetric Korobov functions and prove that both the convergence rate and the constant prefactor scale at most polynomially in the ambient dimension. This is a substantial improvement over prior approximation guarantees, which suffer from the curse of dimensionality. Building on these approximation bounds, we further derive a generalization-error rate for learning symmetric Korobov functions; its leading factors likewise avoid the curse of dimensionality.
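For readers unfamiliar with the terminology, the following are the standard conventions that abstracts of this kind typically assume (the paper may use different boundary conditions or a different smoothness order, so treat this sketch as an assumption, not a quotation): a function on $\Omega = [0,1]^d$ is symmetric if it is invariant under every permutation of its arguments, and the order-two Korobov space collects functions with integrable mixed derivatives up to order two in each variable.

```latex
% Hedged sketch of standard definitions (conventions assumed,
% not quoted from the paper).
% Symmetry: invariance under every permutation \sigma of \{1,\dots,d\}:
\[
  f\bigl(x_{\sigma(1)},\dots,x_{\sigma(d)}\bigr) = f(x_1,\dots,x_d)
  \qquad \text{for all } \sigma \in S_d .
\]
% Korobov space of order two on \Omega = [0,1]^d (one common convention):
\[
  X^{2,p}(\Omega)
  = \bigl\{\, f \in L^p(\Omega) \;:\;
      D^{\boldsymbol{\alpha}} f \in L^p(\Omega)
      \text{ for all } \|\boldsymbol{\alpha}\|_\infty \le 2 \,\bigr\},
  \qquad 1 \le p \le \infty .
\]
```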