A Structure-Guided Gauss-Newton Method for Shallow ReLU Neural Network

📅 2024-04-07
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
📄 PDF
🤖 AI Summary
Standard training algorithms for shallow ReLU neural networks suffer from slow convergence and limited accuracy when solving least-squares problems involving discontinuous or sharply transitional functions. To address this, we propose the Structure-guided Gauss–Newton method (SgGN). SgGN decouples optimization of nonlinear hidden-layer parameters from linear output-layer parameters and alternates between them. It introduces, for the first time, a structure-preserving Gauss–Newton matrix for ReLU networks—guaranteed positive definite and invertible without Levenberg–Marquardt damping. Efficiency is achieved via analytical Jacobian computation, structured parameter decomposition, and exact solution of linear subproblems. Experiments demonstrate that SgGN achieves significantly improved stability and accuracy over standard optimizers—including SGD and Adam—on benchmark tasks involving discontinuous and multiscale function approximation.

📝 Abstract
In this paper, we propose a structure-guided Gauss-Newton (SgGN) method for solving least squares problems using a shallow ReLU neural network. The method effectively takes advantage of both the least squares structure and the neural network structure of the objective function. By categorizing the weights and biases of the hidden and output layers of the network as nonlinear and linear parameters, respectively, the method iterates back and forth between the nonlinear and linear parameters. The nonlinear parameters are updated by a damped Gauss-Newton method and the linear ones are updated by a linear solver. Moreover, at the Gauss-Newton step, a special form of the Gauss-Newton matrix is derived for the shallow ReLU neural network and is used for efficient iterations. It is shown that the corresponding mass and Gauss-Newton matrices in the respective linear and nonlinear steps are symmetric and positive definite under reasonable assumptions. Thus, the SgGN method naturally produces an effective search direction without the need of additional techniques like shifting in the Levenberg-Marquardt method to achieve invertibility of the Gauss-Newton matrix. The convergence and accuracy of the method are demonstrated numerically for several challenging function approximation problems, especially those with discontinuities or sharp transition layers that pose significant challenges for commonly used training algorithms in machine learning.
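To make the alternating scheme in the abstract concrete, here is a minimal NumPy sketch: the output-layer weights are updated by an exact linear least-squares solve, and the hidden-layer weights and biases by a damped Gauss-Newton step using the analytical ReLU Jacobian. The 1-D setup, function names, and hyperparameters are illustrative assumptions, not the paper's exact formulation, and a generic `lstsq` solve stands in for the paper's structured Gauss-Newton matrix.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def hidden(x, w, b):
    # Design matrix: one column per hidden neuron, evaluated at the samples.
    return relu(np.outer(x, w) + b)  # shape (n_samples, n_neurons)

def fit_sggn_like(x, y, n_neurons=8, iters=50, damping=0.5, seed=0):
    """Alternate a linear least-squares solve (output weights) with a
    damped Gauss-Newton step (hidden-layer weights and biases)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_neurons)
    b = rng.standard_normal(n_neurons)
    for _ in range(iters):
        # Linear step: output weights c solve an ordinary least-squares problem.
        H = hidden(x, w, b)
        c, *_ = np.linalg.lstsq(H, y, rcond=None)
        # Nonlinear step: Gauss-Newton on (w, b) using the analytical Jacobian
        # of the residual r(w, b) = H(w, b) @ c - y.
        r = H @ c - y
        act = (np.outer(x, w) + b > 0).astype(float)  # ReLU derivative
        Jw = act * x[:, None] * c                     # d r / d w_i
        Jb = act * c                                  # d r / d b_i
        J = np.hstack([Jw, Jb])
        # lstsq(J, r) yields the Gauss-Newton direction (J^T J)^+ J^T r.
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        w -= damping * step[:n_neurons]
        b -= damping * step[n_neurons:]
    # Re-solve for c so the returned output weights match the final (w, b).
    c, *_ = np.linalg.lstsq(hidden(x, w, b), y, rcond=None)
    return w, b, c
```

Fitting a target with a kink, such as y = |x| on [-1, 1], exercises both steps: the linear solve is exact for fixed hidden parameters, while the damped Gauss-Newton step moves the ReLU breakpoints.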
Problem

Research questions and friction points this paper is trying to address.

Solving least-squares problems with shallow ReLU networks
Jointly optimizing nonlinear (hidden-layer) and linear (output-layer) parameters
Accurately approximating discontinuous or sharp-transition functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-guided Gauss-Newton method for shallow ReLU networks
Decouples nonlinear and linear parameter updates
Symmetric positive-definite Gauss-Newton matrix for efficient iterations
👥 Authors
Zhiqiang Cai — Department of Mathematics, Purdue University, West Lafayette, IN
Tong Ding — PhD student in Computer Science, Harvard University (Representation Learning, Computer Vision, Multimodal Learning, Machine Learning for Health)
Min Liu — School of Mechanical Engineering, Purdue University, West Lafayette, IN
Xinyu Liu — Department of Mathematics, Purdue University, West Lafayette, IN
Jianlin Xia — Department of Mathematics, Purdue University, West Lafayette, IN