🤖 AI Summary
Longstanding misconceptions persist in the neural network literature regarding the Kolmogorov–Arnold Representation Theorem (KART) and the Universal Approximation Theorem (UAT); in particular, the two theorems' implications for network architecture design and for justifying depth are frequently conflated.
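For reference, KART in its classical form (a standard statement, not quoted from the note itself) asserts that every continuous multivariate function decomposes into sums and compositions of continuous univariate functions:

```latex
% Kolmogorov–Arnold Representation Theorem (classical form):
% any continuous f on the unit cube admits the representation
\[
  f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),
  \qquad f \in C([0,1]^n),
\]
% where the outer functions \Phi_q and the inner functions \phi_{q,p}
% are continuous functions of a single variable, and the inner
% functions can be chosen independently of f.
```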
Method: Through rigorous theoretical analysis, explicit counterexample construction, and complexity-theoretic lower-bound proofs, the paper clarifies the distinct mathematical foundations, applicability conditions, and conceptual boundaries of KART and UAT.
Contribution/Results: It establishes, for the first time, that KART is inherently non-constructive and that it imposes a fundamental lower bound on the neuron count of smooth-activation multilayer perceptrons (MLPs); it further shows that UAT cannot substantiate the necessity of depth. The work corrects the widespread misuse of UAT to justify deep architectures and formally characterizes KART's realizability constraints on standard MLPs. These results provide a rigorous theoretical framework for neural network foundations, one already adopted in graduate instruction and scholarly surveys.
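For contrast, UAT in its classical form (again a standard statement, e.g. Cybenko 1989 and Leshno et al. 1993, not the note's own phrasing) is a density result for networks with a single hidden layer, which is precisely why it is silent on depth:

```latex
% Universal Approximation Theorem (classical one-hidden-layer form):
% finite sums of ridge functions are dense in C(K), K compact,
% provided the activation \sigma is continuous and non-polynomial
% (Leshno et al.); Cybenko proved the sigmoidal case.
\[
  \left\{ \, x \mapsto \sum_{i=1}^{N} a_i \, \sigma\!\left(w_i^{\top} x + b_i\right)
  \;:\; N \in \mathbb{N},\ a_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n \right\}
  \quad \text{is dense in } C(K).
\]
% Density with one hidden layer neither requires nor justifies depth,
% so UAT cannot be cited as a rationale for deep architectures.
```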
📝 Abstract
This note addresses the Kolmogorov–Arnold Representation Theorem (KART) and the Universal Approximation Theorem (UAT), focusing on their frequent misinterpretation in papers on neural network approximation. Our remarks aim to support a more accurate understanding of KART and UAT among neural network specialists. In addition, we explore the minimal number of neurons required for universal approximation, showing that KART's lower bounds extend to standard multilayer perceptrons, even those with smooth activation functions.
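A sketch of the object whose minimal size the abstract refers to (the width-N shallow MLP; the notation here is ours, not the paper's):

```latex
% A standard one-hidden-layer MLP of width N with activation \sigma.
% The note asks how small N can be while universal approximation on
% [0,1]^n is retained; per the abstract, KART-style lower bounds on
% the number of summands carry over even when \sigma is smooth.
\[
  \mathrm{MLP}_N(x) \;=\; \sum_{i=1}^{N} a_i \, \sigma\!\left(w_i^{\top} x + b_i\right),
  \qquad x \in [0,1]^n .
\]
```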