A Dual Certificate Approach to Sparsity in Infinite-Width Shallow Neural Networks

πŸ“… 2026-03-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work investigates the sparsity of solutions in infinitely wide shallow ReLU neural networks under total variation (TV) regularization. By formulating the training problem as a measure optimization over the unit sphere and leveraging convex duality together with the geometry of hyperplane arrangements, the authors uncover a piecewise-linear structure of the dual certificate in weight space and introduce the notion of β€œdual regions.” The main contributions include proving that optimal solutions are finite Dirac measures whose support size is bounded in terms of the data geometry; establishing that, under low noise and small regularization parameters, the sparse structure remains invariant and both the locations and amplitudes of the solution converge at a linear rate; and providing theoretical guarantees for the uniqueness and perturbation stability of these sparse solutions.
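The hyperplane-arrangement picture behind the β€œdual regions” can be illustrated with a small sketch (not the authors' code; the data points and all function names are illustrative): each ReLU neuron with parameters (w, b) induces an activation pattern on the data, and the hyperplanes where a data point switches on or off partition parameter space into finitely many cells, on each of which the dual certificate is linear.

```python
import random

# Illustrative sketch (not the paper's code): a ReLU neuron (w, b) induces
# an activation pattern on the data, recording which points x_i satisfy
# <w, x_i> + b > 0.  The hyperplanes {(w, b) : <w, x_i> + b = 0} partition
# parameter space into finitely many cells -- the paper's "dual regions".

def activation_pattern(w, b, data):
    """Which data points activate the ReLU neuron (w, b)."""
    return tuple(int(sum(wj * xj for wj, xj in zip(w, x)) + b > 0)
                 for x in data)

def count_patterns(data, dim, n_samples=20000, seed=0):
    """Estimate the number of distinct activation patterns (cells of the
    data-induced hyperplane arrangement) by sampling random neurons."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(n_samples):
        w = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        b = rng.gauss(0.0, 1.0)
        seen.add(activation_pattern(w, b, data))
    return len(seen)

# Three points in general position in the plane: the induced central
# arrangement of 3 independent hyperplanes in (w, b)-space (R^3) has
# 8 open cells, so 8 activation patterns occur (boundaries are null sets).
data = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]
n_cells = count_patterns(data, dim=2)
print(n_cells)  # 8
```

Since the number of cells is finite and depends only on the data-induced arrangement, it yields the kind of a-priori bound on the support size of minimizers that the paper establishes.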

πŸ“ Abstract
In this paper, we study total variation (TV)-regularized training of infinite-width shallow ReLU neural networks, formulated as a convex optimization problem over measures on the unit sphere. Our approach leverages the duality theory of TV-regularized optimization problems to establish rigorous guarantees on the sparsity of the solutions to the training problem. Our analysis further characterizes how and when this sparsity persists in a low noise regime and for small regularization parameter. The key observation that motivates our analysis is that, for ReLU activations, the associated dual certificate is piecewise linear in the weight space. Its linearity regions, which we name dual regions, are determined by the activation patterns of the data via the induced hyperplane arrangement. Taking advantage of this structure, we prove that, on each dual region, the dual certificate admits at most one extreme value. As a consequence, the support of any minimizer is finite, and its cardinality can be bounded from above by a constant depending only on the geometry of the data-induced hyperplane arrangement. Then, we further investigate sufficient conditions ensuring uniqueness of such a sparse solution. Finally, under a suitable non-degeneracy condition on the dual certificate along the boundaries of the dual regions, we prove that in the presence of low label noise and for small regularization parameter, solutions to the training problem remain sparse with the same number of Dirac deltas. Additionally, their locations and amplitudes converge, and, when the locations lie in the interior of a dual region, the convergence happens with a rate that depends linearly on the noise and the regularization parameter.
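As a sketch, the training problem described in the abstract can be written in standard notation for this line of work (all symbols below are illustrative choices, not taken from the paper): the network is a superposition of ReLU neurons weighted by a signed measure $\mu$ on the sphere, and one minimizes

```latex
% TV-regularized training over measures (illustrative notation):
\min_{\mu \in \mathcal{M}(\mathbb{S}^{d})}
  \frac{1}{2}\sum_{i=1}^{n}
  \left( \int_{\mathbb{S}^{d}} \sigma\big(\langle w, x_i\rangle + b\big)\, d\mu(w,b) - y_i \right)^{2}
  + \lambda\, \|\mu\|_{\mathrm{TV}},
  \qquad \sigma(t) = \max(0,t).

% Associated dual certificate, with dual variable p \in \mathbb{R}^{n}:
\eta(w,b) = \frac{1}{\lambda} \sum_{i=1}^{n} p_i\, \sigma\big(\langle w, x_i\rangle + b\big).
```

Because $\sigma$ is piecewise linear, $\eta$ is piecewise linear in $(w,b)$, and its linearity regions are the cells of the arrangement of hyperplanes $\{\langle w, x_i\rangle + b = 0\}$, i.e., the dual regions of the abstract.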
Problem

Research questions and friction points this paper is trying to address.

sparsity
infinite-width neural networks
total variation regularization
ReLU activation
dual certificate
Innovation

Methods, ideas, or system contributions that make the work stand out.

dual certificate
sparsity
infinite-width neural networks
total variation regularization
ReLU activation
Leonardo Del Grande
Department of Applied Mathematics, University of Twente, 7500AE Enschede, The Netherlands
Christoph Brune
Applied Mathematics, University of Twente
Mathematics, Inverse Problems, Medical Imaging, Deep Learning
Marcello Carioni
Department of Applied Mathematics, University of Twente, 7500AE Enschede, The Netherlands