🤖 AI Summary
This paper investigates whether neural networks retain universal approximation capability under a highly constrained training paradigm in which only weight permutations (no numerical updates) are permitted.
Method: Focusing on the approximation of one-dimensional continuous functions, the authors develop an analytical framework that combines combinatorial optimization with probabilistic construction, applied to feedforward ReLU networks.
Contribution/Results: The authors give the first rigorous proof that, for any continuous function on a compact interval and any target accuracy, there exists a finite sequence of weight permutations after which the network approximates the function to within that accuracy. This departs fundamentally from gradient-based training, which relies on continuous parameter updates, and establishes that universal approximation is theoretically achievable through purely discrete structural operations. The result offers new insight into neural-network optimization mechanisms and provides a theoretical foundation for hardware-efficient, permutation-only training methods.
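The claim can be paraphrased formally as follows; the notation below is illustrative and not necessarily the paper's own (in particular, the network size m may depend on f and ε):

```latex
% Hedged paraphrase of the summary's claim; notation is illustrative.
% N_{\pi(\theta)}: the ReLU network with its fixed weight multiset \theta
% rearranged by a permutation \pi of the m weight positions.
\[
\forall f \in C([a,b]),\ \forall \varepsilon > 0:\quad
\exists\, \theta \in \mathbb{R}^{m},\ \exists\, \pi \in S_m \ \text{s.t.}\
\sup_{x \in [a,b]} \bigl| f(x) - N_{\pi(\theta)}(x) \bigr| < \varepsilon.
\]
```

To make the paradigm concrete, here is a minimal toy sketch of permutation-only fitting, assuming nothing about the paper's actual construction: a one-hidden-layer ReLU network with a fixed multiset of weights, where a greedy search over pairwise swaps of the output weights (the names `net` and `loss` and the choice of target function are ours) reduces approximation error without ever changing a weight's numerical value.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def net(x, w_in, b, w_out):
    # One-hidden-layer ReLU network: sum_j w_out[j] * relu(w_in[j] * x + b[j])
    return relu(np.outer(x, w_in) + b) @ w_out

# Fixed multiset of weights: "training" may only rearrange entries
# (here, only within w_out for simplicity), never change their values.
n_hidden = 32
w_in = rng.standard_normal(n_hidden)
b = rng.uniform(-1.0, 1.0, n_hidden)
w_out = rng.standard_normal(n_hidden) / n_hidden

x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x)  # a continuous target on the compact interval [0, 1]

def loss(w):
    return np.mean((net(x, w_in, b, w) - target) ** 2)

best, best_loss = w_out.copy(), loss(w_out)
for _ in range(20_000):
    cand = best.copy()
    i, j = rng.integers(n_hidden, size=2)
    cand[i], cand[j] = cand[j], cand[i]  # a single transposition of two weights
    cand_loss = loss(cand)
    if cand_loss < best_loss:            # greedy: keep the swap only if it helps
        best, best_loss = cand, cand_loss

print(f"MSE before: {loss(w_out):.4f}  after permutation-only search: {best_loss:.4f}")
```

A greedy transposition search like this is just one way to realize "a finite sequence of weight permutations"; the paper's existence proof does not depend on any particular search procedure.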