Neural Networks Trained by Weight Permutation are Universal Approximators

📅 2024-07-01
🏛️ Neural Networks
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
This paper investigates whether neural networks retain universal approximation capability under a highly constrained training paradigm in which only weight permutations, with no numerical updates, are permitted. Method: Focusing on the approximation of one-dimensional continuous functions, the authors develop an analytical framework that combines combinatorial optimization with probabilistic construction, applied to ReLU feedforward networks. Contribution/Results: They give the first rigorous proof that, for any continuous function on a compact interval and any desired accuracy, there exists a finite sequence of weight permutations after which the network approximates the function to within that accuracy. The result departs fundamentally from conventional gradient-based training, which relies on continuous parameter updates, and establishes the theoretical feasibility of universal approximation through purely discrete structural operations. It offers new insight into neural network optimization mechanisms and lays a theoretical foundation for hardware-efficient, permutation-only training methods.
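The summary is abstract, so here is a minimal sketch of the training constraint it describes, in Python. To be clear about assumptions: the greedy swap search, the sin(2πx) target, and the `permutation_train` helper are illustrative inventions, not the paper's construction, and the sketch permutes freely across layers and biases, which the paper's proof may well restrict. What it does show faithfully is the invariant that the multiset of weight values is frozen at initialization and training only rearranges their positions.

```python
import numpy as np

def permutation_train(seed, hidden=64, steps=20000):
    """Fit sin(2*pi*x) on [0, 1] with a one-hidden-layer ReLU network,
    moving weights between positions but never changing their values.
    (Illustrative sketch, not the paper's constructive proof.)"""
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.0, 1.0, 128)
    ys = np.sin(2.0 * np.pi * xs)

    # Flattened parameters: w1 (hidden), b1 (hidden), w2 (hidden), b2 (1).
    # This multiset of values is frozen for the rest of training.
    theta = rng.standard_normal(3 * hidden + 1)

    def loss(t):
        w1, b1 = t[:hidden], t[hidden:2 * hidden]
        w2, b2 = t[2 * hidden:3 * hidden], t[3 * hidden]
        h = np.maximum(0.0, np.outer(xs, w1) + b1)  # ReLU hidden layer
        return np.mean((h @ w2 + b2 - ys) ** 2)

    # Greedy local search over transpositions: try a random swap of two
    # entries and keep it only if the squared error drops. Every state
    # reached this way is a permutation of the initial weight vector.
    best = loss(theta)
    for _ in range(steps):
        i, j = rng.integers(0, theta.size, size=2)
        theta[i], theta[j] = theta[j], theta[i]
        trial = loss(theta)
        if trial < best:
            best = trial
        else:
            theta[i], theta[j] = theta[j], theta[i]  # revert the swap

    return best

print(f"MSE after permutation-only training: {permutation_train(0):.4f}")
```

Hill-climbing over single swaps is just one way to search the permutation group; the point worth noticing is that sorting `theta` at any moment during training returns exactly the initial sorted weight vector.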

Problem

Research questions and friction points this paper is trying to address.

Can training by weight permutation alone, with no numerical weight updates, give ReLU networks universal approximation capability? (Formalized just below.)
Is permutation training efficient enough to be practical on regression tasks?
Can permutation training serve as a tool for analyzing how networks learn?
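Read literally, the first question asks for an existence result. A hedged formalization, with all notation assumed here rather than taken from the paper: let $N_\theta$ denote a ReLU feedforward network with parameters flattened into $\theta \in \mathbb{R}^m$, and let $\theta_\sigma$ be $\theta$ with entries reordered by $\sigma \in S_m$. Since a finite sequence of permutations composes into a single permutation, the claimed result has the shape

```latex
% Hedged restatement; the paper's exact quantifiers, network class, and
% whether \theta is chosen or randomly drawn may differ.
\forall f \in C([a,b]),\ \forall \varepsilon > 0,\
\exists m \in \mathbb{N},\ \exists \theta \in \mathbb{R}^m,\ \exists \sigma \in S_m :
\quad \sup_{x \in [a,b]} \left| N_{\theta_\sigma}(x) - f(x) \right| < \varepsilon.
```

Whether $\theta$ may be chosen freely or is a generic random draw that works with high probability is a detail the summary leaves open, though its mention of a "probabilistic construction" suggests the latter.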
Innovation

Methods, ideas, or system contributions that make the work stand out.

Permutation-based training without weight modification
Theoretical guarantee for ReLU network approximation
Efficient regression with diverse initializations (see the restart sketch below)
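One rough way to read the "diverse initializations" point: because the frozen value multiset bounds what any permutation of it can express, restarting from several random draws samples genuinely different search spaces, not merely different optimization paths. A hypothetical restart experiment, reusing the `permutation_train` sketch from above:

```python
# Hypothetical restart experiment (depends on permutation_train defined
# in the earlier sketch): each seed freezes a different value multiset.
losses = [permutation_train(seed) for seed in range(5)]
print("per-seed MSE:", [round(l, 4) for l in losses])
print("best of 5 initializations:", round(min(losses), 4))
```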