🤖 AI Summary
This paper investigates the structural inductive bias of overparameterized deep ReLU networks (depth greater than two) in interpolation learning. Focusing on architectures that prepend linear layers to a shallow ReLU subnetwork, the authors develop an analytical framework based on the representation cost, defined as the minimal sum of squared network weights needed to represent a function, and show that this design substantially reduces the *mixed variation* of the learned function. Consequently, the network implicitly favors structured functions tied to low-dimensional subspaces: those with limited variation in directions orthogonal to a low-dimensional subspace, which are well approximated by single- or multi-index models. Theoretically and empirically, the paper demonstrates that this mechanism aligns the learned weights with the true latent low-dimensional subspace when data is generated by a multi-index model and that the added linear layers can improve generalization. In doing so, it characterizes the role of input-side linear layers in inducing implicit regularization in deep networks through the representation cost.
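For reference, the representation cost described above can be written as follows (a minimal sketch consistent with the definition given in the abstract; the symbols R_arch, theta, and h_theta are our own notation, not taken from the paper):

```latex
% Representation cost of a function f induced by a network architecture:
% the minimum sum of squared weights (the squared l2-norm of the weight
% vector theta) over all weight settings that realize f exactly, where
% h_theta denotes the function computed by the network with weights theta.
\[
  R_{\mathrm{arch}}(f) \;=\; \min_{\theta \,:\, h_\theta = f} \;\|\theta\|_2^2 .
\]
```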
📝 Abstract
Neural networks often operate in the overparameterized regime, in which there are far more parameters than training samples, allowing the training data to be fit perfectly. That is, training the network effectively learns an interpolating function, and properties of the interpolant affect predictions the network will make on new samples. This manuscript explores the properties of such functions learned by neural networks of depth greater than two layers. Our framework considers a family of networks of varying depths that all have the same capacity but different representation costs. The representation cost of a function induced by a neural network architecture is the minimum sum of squared weights needed for the network to represent the function; it reflects the function space bias associated with the architecture. Our results show that adding linear layers to the input side of a shallow ReLU network yields a representation cost favoring functions with low mixed variation - that is, functions with limited variation in directions orthogonal to a low-dimensional subspace, which can be well approximated by a single- or multi-index model. Such functions may be represented as the composition of a function with low two-layer representation cost and a low-rank linear operator applied to the input. Our experiments confirm this behavior in standard network training regimes. They additionally show that linear layers can improve generalization and that the learned network is well aligned with the true latent low-dimensional linear subspace when the data is generated using a multi-index model.
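To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of architecture and experiment the abstract describes: a trainable linear layer prepended to a shallow ReLU network, trained with weight decay, which penalizes the sum of squared weights, the quantity whose constrained minimum defines the representation cost. The data-generating model, dimensions, hyperparameters, and alignment check below are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

d, width, n = 10, 512, 50          # input dimension, hidden width, sample count

# Synthetic single-index data: y depends on x only through one direction a.
a = torch.randn(d)
a /= a.norm()
X = torch.randn(n, d)
y = torch.sin(2.0 * (X @ a)).unsqueeze(1)

model = nn.Sequential(
    nn.Linear(d, d, bias=False),   # linear layer added on the input side
    nn.Linear(d, width),           # shallow ReLU subnetwork: linear, ReLU, linear
    nn.ReLU(),
    nn.Linear(width, 1),
)

# Weight decay trades off data fit against the sum of squared weights,
# i.e., the objective whose minimum is the representation cost.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

for step in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# A crude check of the claimed behavior: the input-side linear layer should
# become approximately low-rank, with its row space aligned to the latent
# direction a of the single-index model.
W = model[0].weight.detach()
u, s, vt = torch.linalg.svd(W)
print("leading singular values of input linear layer:", s[:3].tolist())
print("|cosine| between top right singular vector and a:",
      float((vt[0] @ a).abs()))
```

Whether this toy run exhibits strong alignment depends on the training regime and hyperparameters; the paper's theory and experiments make the corresponding claims precise.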