🤖 AI Summary
This work addresses the limitations of existing universal approximation theories for neural networks, which typically rely on uniform hypercube partitions and struggle to capture the local irregularities of target functions near singularities. To overcome this, the authors propose a task-oriented approximation strategy based on polyhedral decomposition, integrating kernel polynomial constructions with Totik–Ditzian-type moduli of continuity. Within each subdomain, ReLU networks are individually tailored to the local geometry and regularity of the function. This approach significantly enhances approximation efficiency and flexibility in regions containing singularities and achieves faster convergence rates for analytic functions compared to conventional uniform partitioning methods.
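For reference, the classical first-order Ditzian–Totik modulus on [-1, 1] takes the following form (shown here only as background; the "refined Totik–Ditzian-type" modulus used in the paper is presumably a variant adapted to its polytopal setting):

```latex
% Classical first-order Ditzian-Totik modulus of continuity on [-1, 1].
% The step weight \varphi vanishes at the endpoints, so the function is
% probed with smaller steps near the boundary of the domain.
\omega_{\varphi}(f, t)
  = \sup_{0 < h \le t}\;
    \sup_{x \pm \frac{h\varphi(x)}{2} \,\in\, [-1, 1]}
    \left| f\!\Bigl(x + \tfrac{h\varphi(x)}{2}\Bigr)
         - f\!\Bigl(x - \tfrac{h\varphi(x)}{2}\Bigr) \right|,
  \qquad \varphi(x) = \sqrt{1 - x^2}.
```

Because the weight φ vanishes at the endpoints, the modulus tolerates weaker regularity exactly where singular behavior tends to concentrate, which is why such moduli pair naturally with decompositions adapted to local irregularities.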
📄 Abstract
Universal approximation theory offers a foundational framework for verifying neural network expressiveness, enabling principled utilization in real-world applications. However, most existing theoretical constructions are established by uniformly dividing the input space into tiny hypercubes, without considering the local regularity of the target function. In this work, we investigate the universal approximation capabilities of ReLU networks from the viewpoint of polytope decomposition, which offers a more realistic and task-oriented approach than current methods. To achieve this, we develop an explicit kernel polynomial method to derive a universal approximation of continuous functions, characterized not only by a refined Totik–Ditzian-type modulus of continuity but also by polytopal domain decomposition. A ReLU network is then constructed to approximate the kernel polynomial on each subdomain separately. Furthermore, we find that polytope decomposition makes our approximation more efficient and flexible than existing methods in many cases, especially near singular points of the target function. Lastly, we extend our approach to analytic functions to attain a higher approximation rate.
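To make the core idea concrete, here is a minimal 1-D sketch (not the paper's construction; the test function, grading rule, and all names are illustrative): a piecewise-linear interpolant of f(x) = √x, which is singular at x = 0, realized as an explicit one-hidden-layer ReLU network, once on a uniform grid and once on a partition graded toward the singularity.

```python
# Minimal 1-D illustration (not the paper's construction): a piecewise-
# linear interpolant of f(x) = sqrt(x), singular at x = 0, is realized
# as an explicit shallow ReLU network on a uniform partition and on a
# partition graded toward the singularity. The graded grid plays the
# role of a task-oriented domain decomposition.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pwl_as_relu_net(knots, values):
    """Return a one-hidden-layer ReLU network realizing the
    piecewise-linear interpolant through (knots, values), written as
        g(x) = values[0] + sum_k c_k * relu(x - knots[k]),
    where c_k is the change of slope at knot k.
    """
    slopes = np.diff(values) / np.diff(knots)          # slope on each cell
    # coefficient of relu(x - knots[k]) = slope change at knot k
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
    def net(x):
        return values[0] + sum(
            c * relu(x - t) for c, t in zip(coeffs, knots[:-1])
        )
    return net

f = np.sqrt
n = 16                                        # number of cells / ReLU units
uniform = np.linspace(0.0, 1.0, n + 1)        # uniform hypercube-style grid
graded = np.linspace(0.0, 1.0, n + 1) ** 2    # refined near the singularity

x = np.linspace(0.0, 1.0, 20001)
for name, knots in [("uniform", uniform), ("graded", graded)]:
    net = pwl_as_relu_net(knots, f(knots))
    err = np.max(np.abs(net(x) - f(x)))
    print(f"{name:8s} partition: sup-error = {err:.4f}")
```

At equal network width, the graded partition reduces the sup-norm error by roughly a factor of four on this toy problem (from about 0.0625 to about 0.0156), illustrating qualitatively why decompositions adapted to local regularity can outperform uniform hypercube partitions near singular points.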