Compelling ReLU Networks to Exhibit Exponentially Many Linear Regions at Initialization and During Training

📅 2023-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
ReLU networks exhibit far fewer linear regions at random initialization than their theoretical exponential upper bound, which limits their capacity to fit even simple tasks. To address this, we propose a differentiable weight reparameterization that, for the first time, induces a number of linear regions exponential in depth *at initialization*. Our method couples a piecewise-linear construction with the reparameterization to preserve high expressive power while remaining end-to-end trainable, and employs a two-stage optimization scheme (training on the derived parameters first, then refining the underlying weights directly) to improve training stability. Experiments demonstrate improvements of several orders of magnitude in approximation accuracy on 1D convex function fitting, with significant gains persisting on multidimensional and non-convex tasks. The core contribution is breaking the expressivity bottleneck at initialization, establishing a new paradigm for structure–training co-design in deep ReLU networks.
📝 Abstract
A neural network with ReLU activations may be viewed as a composition of piecewise linear functions. For such networks, the number of distinct linear regions expressed over the input domain has the potential to scale exponentially with depth, but it is not expected to do so when the initial parameters are chosen randomly. Therefore, randomly initialized models are often unnecessarily large, even when approximating simple functions. To address this issue, we introduce a novel training strategy: we first reparameterize the network weights in a manner that forces the network to exhibit a number of linear regions exponential in depth. Training first on our derived parameters provides an initial solution that can later be refined by directly updating the underlying model weights. This approach allows us to learn approximations of convex, one-dimensional functions that are several orders of magnitude more accurate than their randomly initialized counterparts. We further demonstrate how to extend our approach to multidimensional and non-convex functions, with similar benefits observed.
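The exponential-in-depth capacity the abstract refers to can be illustrated with the classic sawtooth (hat-function) composition, which a ReLU network expresses exactly: composing k hat functions yields 2^k linear pieces, while a comparably sized randomly initialized network realizes far fewer regions. The sketch below is illustrative only, not the paper's method: the network shapes, the small random biases, and the grid-based counting are all assumptions made for the demo. It counts linear regions along [0, 1] by tracking changes in the ReLU activation pattern, which is constant within each region.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_random_net(depth=8, width=8):
    """He-initialized 1-D -> 1-D ReLU MLP as a list of (W, b) layers.
    Small random biases (an illustrative choice) spread breakpoints over [0, 1]."""
    dims = [1] + [width] * depth + [1]
    return [(rng.normal(0.0, np.sqrt(2.0 / din), size=(din, dout)),
             rng.normal(0.0, 0.5, size=dout))
            for din, dout in zip(dims[:-1], dims[1:])]

def make_sawtooth_net(depth=8):
    """hat(t) = 2*relu(t) - 4*relu(t - 1/2) is a tent map on [0, 1];
    composing `depth` hats yields 2**depth linear pieces."""
    first = (np.array([[1.0, 1.0]]), np.array([0.0, -0.5]))
    mid = (np.array([[2.0, 2.0], [-4.0, -4.0]]), np.array([0.0, -0.5]))
    last = (np.array([[2.0], [-4.0]]), np.array([0.0]))
    return [first] + [mid] * (depth - 1) + [last]

def activation_pattern(x, net):
    """Boolean on/off state of every hidden ReLU at each input point."""
    h = x.reshape(-1, 1)
    patterns = []
    for W, b in net[:-1]:          # all layers except the linear output layer
        pre = h @ W + b
        patterns.append(pre > 0)
        h = np.maximum(pre, 0.0)
    return np.concatenate(patterns, axis=1)

def count_regions(net, n=100_001):
    """Count linear regions along [0, 1]: each region has a constant
    activation pattern, so count pattern changes between grid neighbours."""
    p = activation_pattern(np.linspace(0.0, 1.0, n), net)
    return int(np.any(p[1:] != p[:-1], axis=1).sum()) + 1

depth = 8
rnd = count_regions(make_random_net(depth=depth))
saw = count_regions(make_sawtooth_net(depth=depth))
print("random init:", rnd, "regions")   # far below the exponential bound
print("sawtooth   :", saw, "regions")   # at least 2**depth = 256
```

The sawtooth network carries the hat value through two hidden units per layer (`2*r1 - 4*r2` reconstructs it), so a depth-8 network with only 16 hidden neurons realizes at least 256 regions, while the random network of the same depth typically realizes far fewer, consistent with the paper's motivation.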
Problem

Research questions and friction points this paper is trying to address.

ReLU Networks
Random Initialization
Linear-Region Underutilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReLU Neural Networks
Parameter Initialization
Increased Linear Segments