🤖 AI Summary
This work investigates how ℓ²-regularization (weight decay) implicitly induces low-rank structure in the weight matrices of deep neural networks trained by gradient descent (GD) and gradient flow (GF). Theoretically, we show that at stationary points three properties coincide, each induced by the ℓ²-regularization: alignment between the parameters and the loss gradient, conservation of norms across layers, and a low-rank bias. Furthermore, we show that when the inputs of two training sets are approximately orthogonal, a network that performs as well on both sets as the individually trained networks can be obtained by simply summing their weights, without any additional training. This yields a training-free model merging approach grounded in input orthogonality. We provide rigorous proofs of these properties and validate them empirically on shallow ReLU, deep linear, and deep ReLU networks.
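As a rough illustration of why the first two of these properties are linked (the notation here is ours and only sketches the argument, not the paper's exact statements): write the regularized objective as $\mathcal{L}(\theta) + \lambda\|\theta\|^2$ with layer weights $\theta = (W_1, \dots, W_L)$. At any stationary point,

$$\nabla_{W_i} \mathcal{L} = -2\lambda\, W_i \quad \text{for every layer } i,$$

so each layer's loss gradient is anti-parallel to its weights, which is exactly the parameter-gradient alignment. If, in addition, the network output is positively 1-homogeneous in each layer's weights (as for linear or bias-free ReLU layers), Euler's identity for homogeneous functions gives $\langle W_i, \nabla_{W_i}\mathcal{L}\rangle = \langle W_j, \nabla_{W_j}\mathcal{L}\rangle$ for all layers, and substituting the alignment condition yields

$$\|W_i\|_F = \|W_j\|_F \quad \text{for all layers } i, j,$$

i.e., inter-layer norm conservation. The low-rank bias is the deeper part of the analysis and does not follow from this one-line argument.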
📝 Abstract
We explore the low-rank structure of the weight matrices in neural networks that arises from training with Gradient Descent (GD) and Gradient Flow (GF) under $L_2$ regularization (also known as weight decay). We show several properties of GD-trained deep neural networks induced by $L_2$ regularization. In particular, at a stationary point of GD we show alignment of the parameters and the gradient, norm preservation across layers, and a low-rank bias: properties previously known in the context of GF solutions. Experiments show that the assumptions made in the analysis affect the observations only mildly. In addition, we investigate a multitask learning phenomenon enabled by $L_2$ regularization and the low-rank bias. In particular, we show that if two networks are trained on training sets whose inputs are approximately orthogonal to each other, then the network obtained by simply summing the weights of the two networks performs as well on both training sets as the respective individual networks. We demonstrate this for shallow ReLU neural networks trained by GD, as well as for deep linear and deep ReLU networks trained by GF.
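The merging claim can be probed with a small numerical experiment. The sketch below is our own minimal setup, not the paper's protocol: the architecture, data generation, and hyperparameters (a one-hidden-layer bias-free ReLU network, synthetic tasks supported on disjoint coordinate blocks so that their inputs are exactly orthogonal, full-batch GD with weight decay) are illustrative choices. It trains the two networks independently, sums their weights, and reports the training loss of each individual network and of the merged one.

```python
# Minimal sketch (illustrative, not the paper's protocol): two shallow
# bias-free ReLU networks f(x) = v^T ReLU(W x) are trained with full-batch
# GD + weight decay on tasks whose inputs occupy orthogonal coordinate
# blocks; their weights are then summed and the merged net is evaluated.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 20, 100, 50            # input dim, hidden width, samples per task
lr, wd, steps = 0.01, 1e-3, 30000

def make_task(active_coords):
    """Inputs supported on a coordinate block; random linear teacher."""
    X = np.zeros((n, d))
    X[:, active_coords] = rng.standard_normal((n, len(active_coords)))
    w_star = rng.standard_normal(d) * np.isin(np.arange(d), active_coords)
    y = X @ w_star / np.sqrt(len(active_coords))   # keep targets O(1)
    return X, y

def forward(W, v, X):
    return np.maximum(X @ W.T, 0.0) @ v            # v^T ReLU(W x) per sample

def mse(W, v, X, y):
    return float(np.mean((forward(W, v, X) - y) ** 2))

def train(X, y):
    W = 0.1 * rng.standard_normal((m, d))
    v = 0.1 * rng.standard_normal(m)
    for _ in range(steps):
        H = X @ W.T                       # pre-activations, shape (n, m)
        A = np.maximum(H, 0.0)            # ReLU activations
        r = 2 * (A @ v - y) / len(y)      # d(MSE)/d(prediction)
        gv = A.T @ r + 2 * wd * v
        gW = ((r[:, None] * (H > 0)) * v[None, :]).T @ X + 2 * wd * W
        v -= lr * gv
        W -= lr * gW
    return W, v

XA, yA = make_task(np.arange(0, d // 2))   # task A: first half of coordinates
XB, yB = make_task(np.arange(d // 2, d))   # task B: second half (orthogonal)

WA, vA = train(XA, yA)
WB, vB = train(XB, yB)
Wm, vm = WA + WB, vA + vB                  # training-free merge: sum weights

for name, Wi, vi, X, y in [("task A", WA, vA, XA, yA),
                           ("task B", WB, vB, XB, yB)]:
    print(f"{name}: individual {mse(Wi, vi, X, y):.4f} | "
          f"merged {mse(Wm, vm, X, y):.4f}")
```

Using disjoint coordinate blocks is simply the cleanest instance of the approximate-orthogonality assumption in the abstract; nearly orthogonal random inputs in high dimension would play the same role.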