Learning Neural Networks by Neuron Pursuit

📅 2025-09-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work studies the evolution of gradient flow for homogeneous neural networks near a class of saddle points with a sparsity structure, showing that trajectories initialized sufficiently close to such a saddle point stay near it for a long time while the small-norm weights converge in direction. Motivated by this analysis, the paper proposes "Neuron Pursuit", a greedy algorithm that alternates between adding neurons with carefully chosen weights and minimizing the training loss on the augmented network, progressively constructing sparse architectures. Numerical experiments validate the efficacy of the proposed algorithm.

๐Ÿ“ Abstract
The first part of this paper studies the evolution of gradient flow for homogeneous neural networks near a class of saddle points exhibiting a sparsity structure. The choice of these saddle points is motivated by previous works on homogeneous networks, which identified the first saddle point encountered by gradient flow after escaping the origin. It is shown here that, when initialized sufficiently close to such saddle points, gradient flow remains near the saddle point for a sufficiently long time, during which the weights with small norm remain small but converge in direction. Furthermore, important empirical observations are made on the behavior of gradient descent after escaping these saddle points. The second part of the paper, motivated by these results, introduces a greedy algorithm to train deep neural networks called Neuron Pursuit (NP). It is an iterative procedure that alternates between expanding the network by adding neuron(s) with carefully chosen weights, and minimizing the training loss using this augmented network. The efficacy of the proposed algorithm is validated using numerical experiments.
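The grow-then-train loop described in the abstract can be sketched in a toy setting. This is a minimal illustration, not the paper's method: the network (a two-layer ReLU model on 1D regression data), the residual-correlation rule for choosing the new neuron's direction, and all hyperparameters are hypothetical stand-ins for the paper's saddle-point-based construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression data (hypothetical stand-in for the paper's experiments).
X = rng.uniform(-1, 1, size=(64, 1))
y = np.maximum(X, 0).ravel() + 0.5 * np.maximum(-X, 0).ravel()

def forward(X, W, a):
    # Two-layer ReLU network: f(x) = sum_j a_j * relu(w_j . x).
    return np.maximum(X @ W.T, 0) @ a

def loss(X, y, W, a):
    return 0.5 * np.mean((forward(X, W, a) - y) ** 2)

def train(X, y, W, a, lr=0.1, steps=500):
    # Plain gradient descent on all current parameters.
    n = len(y)
    for _ in range(steps):
        H = np.maximum(X @ W.T, 0)               # hidden activations
        r = H @ a - y                            # residuals
        grad_a = H.T @ r / n
        grad_W = ((r[:, None] * (H > 0)) * a).T @ X / n
        a = a - lr * grad_a
        W = W - lr * grad_W
    return W, a

# Neuron Pursuit-style loop (sketch): grow the network one neuron at a time.
W = np.zeros((0, X.shape[1]))
a = np.zeros(0)
for _ in range(4):
    # Direction choice for the new neuron: correlate inputs with the current
    # residual (a hypothetical heuristic, not the paper's rule).
    r = forward(X, W, a) - y
    d = -X.T @ r
    d = d / (np.linalg.norm(d) + 1e-12)
    W = np.vstack([W, 1e-2 * d])                 # new neuron with small norm
    a = np.append(a, 1e-2)
    W, a = train(X, y, W, a)

print("final loss:", loss(X, y, W, a))
```

Each outer iteration mirrors the abstract's two phases: the network is expanded with a small-norm neuron pointing in a chosen direction, then all parameters of the augmented network are retrained.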
Problem

Research questions and friction points this paper is trying to address.

How gradient flow for homogeneous networks evolves near saddle points with a sparsity structure
Whether local saddle-point dynamics can guide incremental network training
How to validate such a greedy training procedure empirically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directional convergence analysis of gradient flow near sparse saddle points
Neuron Pursuit (NP): a greedy, iterative training algorithm
Network growth by adding neuron(s) with carefully chosen weights