🤖 AI Summary
This work addresses the lack of principled ways to exploit the well-documented rescaling symmetries of ReLU neural network parameters, a gap that can translate into poorly conditioned parameters and unstable training dynamics. Building on the path-lifting framework, the authors propose a geometrically motivated rescaling criterion whose minimization aligns a kernel in path space with a chosen reference kernel, yielding a conditioning strategy that systematically integrates rescaling symmetry into the optimization process. Notably, this is the first conditioning-based rescaling strategy grounded in the geometric structure of path space. Numerical experiments show that the proposed method can speed up the training of ReLU networks, highlighting the benefit of leveraging geometric insights for symmetry-aware optimization.
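As a concrete illustration (not taken from the paper), the rescaling symmetry in question stems from the positive homogeneity of ReLU: scaling a hidden neuron's incoming weights by λ > 0 and its outgoing weights by 1/λ leaves the network function unchanged while altering the gradients. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(16, 8))   # hidden-layer weights
W2 = rng.normal(size=(4, 16))   # output-layer weights
x = rng.normal(size=8)

relu = lambda z: np.maximum(z, 0.0)
f = lambda A, B: B @ relu(A @ x)

# Rescale hidden neuron j: multiply its incoming row by lam > 0
# and its outgoing column by 1/lam. Since relu(lam * z) = lam * relu(z)
# for lam > 0, the network function is unchanged.
lam, j = 5.0, 3
W1s, W2s = W1.copy(), W2.copy()
W1s[j, :] *= lam
W2s[:, j] /= lam

assert np.allclose(f(W1, W2), f(W1s, W2s))  # same function...

# ...but not the same gradients: d f / d W1s[j, :] shrinks by 1/lam
# while d f / d W2s[:, j] grows by lam, so gradient descent traces
# different trajectories from these functionally identical parameters.
```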
📝 Abstract
Despite recent algorithmic advances, we still lack principled ways to leverage the well-documented rescaling symmetries of ReLU neural network parameters. While two properly rescaled sets of weights implement the same function, their training dynamics can be dramatically different. To offer a fresh perspective on exploiting this phenomenon, we build on the recent path-lifting framework, which provides a compact factorization of ReLU networks. We introduce a geometrically motivated criterion for rescaling neural network parameters; its minimization yields a conditioning strategy that aligns a kernel in the path-lifting space with a chosen reference. We derive an efficient algorithm to perform this alignment. In the context of random network initialization, we analyze how the architecture and the initialization scale jointly impact the output of the proposed method. Numerical experiments illustrate its potential to speed up training.
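The paper's kernel-alignment criterion in path-lifting space is not spelled out on this page, so it is not reproduced here. As a hedged stand-in for what a function-preserving conditioning rescale can look like, the sketch below implements the classical norm-balancing heuristic (equalizing each hidden neuron's incoming and outgoing weight norms), which exploits the same symmetry; the function name and interface are illustrative, not the authors' algorithm.

```python
import numpy as np

def balance_hidden_layer(W1, W2, eps=1e-12):
    """Function-preserving rescale of a two-layer ReLU network.

    For each hidden neuron j, choose lam_j so that its rescaled
    incoming row and outgoing column have equal Euclidean norms:
    lam_j = sqrt(||W2[:, j]|| / ||W1[j, :]||). After rescaling,
    both norms equal sqrt(||W1[j, :]|| * ||W2[:, j]||).
    """
    a = np.linalg.norm(W1, axis=1)  # incoming norms, one per hidden neuron
    b = np.linalg.norm(W2, axis=0)  # outgoing norms, one per hidden neuron
    lam = np.sqrt(np.maximum(b, eps) / np.maximum(a, eps))
    return W1 * lam[:, None], W2 / lam[None, :]

# Usage: the rescale leaves the network function unchanged.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 8)) * 0.01   # deliberately imbalanced scales
W2 = rng.normal(size=(4, 16)) * 10.0
x = rng.normal(size=8)
relu = lambda z: np.maximum(z, 0.0)

W1b, W2b = balance_hidden_layer(W1, W2)
assert np.allclose(W2 @ relu(W1 @ x), W2b @ relu(W1b @ x))
```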