Generative Feature Training of Thin 2-Layer Networks

📅 2024-11-11
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the susceptibility of gradient-based optimization to local minima when training two-layer neural networks for function approximation on small datasets, this paper proposes a generative initialization of the hidden-layer weights. Specifically, it learns a proposal distribution over hidden weights with a deep generative model, replacing conventional random initialization, and refines the sampled weights by gradient-based fine-tuning in the latent space with a noise-robust ℓ₂ regularization; the output layer is then solved in closed form via linear least squares. The approach keeps the network thin (i.e., few hidden units) while improving approximation accuracy and training stability. Numerical experiments show consistent gains over standard initialization schemes across several small-data benchmarks, supporting shallow networks as reliable function approximators in data-scarce regimes.
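The closed-form output layer mentioned above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the target function, layer sizes, `tanh` activation, and the random draw of `W` and `b` (which the paper would replace with a sample from the learned generative model) are all assumptions.

```python
import numpy as np

# A 2-layer network f(x) = a^T tanh(W x + b): for FIXED hidden weights
# (W, b), the optimal output weights a minimize a regularized squared
# loss and are available in closed form (ridge regression).

rng = np.random.default_rng(0)

n_data, n_hidden = 20, 8              # small dataset, thin hidden layer
x = rng.uniform(-1, 1, size=(n_data, 1))
y = np.sin(3 * x).ravel()             # hypothetical few-shot target

# Stand-in for hidden weights; the paper samples these from a learned
# deep generative proposal instead of a plain random initialization.
W = rng.normal(size=(n_hidden, 1))
b = rng.normal(size=n_hidden)

Phi = np.tanh(x @ W.T + b)            # hidden features, (n_data, n_hidden)

lam = 1e-3                            # l2 regularization against noise
A = Phi.T @ Phi + lam * np.eye(n_hidden)
a = np.linalg.solve(A, Phi.T @ y)     # closed-form output weights

mse = np.mean((Phi @ a - y) ** 2)
```

Because only the hidden weights remain free, training the proposal distribution never needs to backpropagate through an inner optimization of the output layer.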

📝 Abstract
We consider the approximation of functions by 2-layer neural networks with a small number of hidden weights based on the squared loss and small datasets. Due to the highly non-convex energy landscape, gradient-based training often suffers from local minima. As a remedy, we initialize the hidden weights with samples from a learned proposal distribution, which we parameterize as a deep generative model. To train this model, we exploit the fact that with fixed hidden weights, the optimal output weights solve a linear equation. After learning the generative model, we refine the sampled weights with a gradient-based post-processing in the latent space. Here, we also include a regularization scheme to counteract potential noise. Finally, we demonstrate the effectiveness of our approach by numerical examples.
Problem

Research questions and friction points this paper is trying to address.

Approximating functions with 2-layer neural networks efficiently
Overcoming local minima in gradient-based training
Using generative models for improved weight initialization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Initialize hidden weights via deep generative model
Refine weights with gradient-based post-processing
Regularize to counteract potential noise effects
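The three innovation bullets above can be combined into one toy sketch. Everything here is an assumption for illustration: the "generator" is a fixed random linear map standing in for the learned deep generative model, and the latent refinement uses a finite-difference gradient with a simple backtracking step rather than the paper's optimizer.

```python
import numpy as np

# Latent-space post-processing: decode hidden weights (W, b) = G(z) from
# a latent code z, solve the output layer in closed form at every step,
# and descend the regularized training loss with respect to z.

rng = np.random.default_rng(1)
n_data, n_hidden, n_latent = 20, 6, 4
x = rng.uniform(-1, 1, size=(n_data, 1))
y = np.sin(3 * x).ravel()

G = rng.normal(size=(n_hidden * 2, n_latent))  # toy linear "generator"
lam = 1e-3                                     # l2 regularization weight

def loss(z):
    w = G @ z
    W, b = w[:n_hidden, None], w[n_hidden:]
    Phi = np.tanh(x @ W.T + b)
    # closed-form ridge solution for the output weights
    a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_hidden), Phi.T @ y)
    return np.mean((Phi @ a - y) ** 2) + lam * np.sum(a ** 2)

z = rng.normal(size=n_latent)                  # "sampled" latent code
initial = loss(z)
step, eps = 0.1, 1e-5
for _ in range(200):
    # finite-difference gradient in the low-dimensional latent space
    g = np.array([(loss(z + eps * e) - loss(z - eps * e)) / (2 * eps)
                  for e in np.eye(n_latent)])
    z_new = z - step * g
    if loss(z_new) < loss(z):                  # accept only improving steps
        z = z_new
    else:
        step *= 0.5

final = loss(z)
```

Refining in the latent space rather than in weight space keeps the search inside the support of the learned proposal distribution, which is what makes the fine-tuning robust for thin networks.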