🤖 AI Summary
To address the susceptibility of gradient-based optimization to local minima in two-layer neural networks under few-shot function approximation, this paper proposes a generative latent-feature initialization method. Specifically, it employs a deep generative model to learn a proposal distribution over hidden-layer weights—replacing conventional random initialization—and combines latent-space gradient fine-tuning with noise-robust ℓ₂ regularization; the output layer is solved in closed form via linear least squares. The approach keeps the network small (few hidden units) while substantially improving approximation accuracy and training stability. Numerical experiments demonstrate consistent improvements over standard initialization schemes on few-data benchmarks. This work offers a practical route to reliable function approximation with shallow networks in data-scarce regimes.
📝 Abstract
We consider the approximation of functions by 2-layer neural networks with a small number of hidden weights, trained on small datasets with the squared loss. Due to the highly non-convex energy landscape, gradient-based training often suffers from local minima. As a remedy, we initialize the hidden weights with samples from a learned proposal distribution, which we parameterize as a deep generative model. To train this model, we exploit the fact that, with fixed hidden weights, the optimal output weights solve a linear equation. After learning the generative model, we refine the sampled weights with gradient-based post-processing in the latent space. In this step, we also include a regularization scheme to counteract potential noise. Finally, we demonstrate the effectiveness of our approach with numerical examples.
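The closed-form step the abstract relies on—that with fixed hidden weights the optimal output weights solve a linear equation—can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the function names (`hidden_features`, `fit_output_weights`), the ReLU activation, the ridge parameter `lam`, and the random stand-in for the hidden weights are all assumptions for demonstration.

```python
import numpy as np

def hidden_features(X, W, b):
    """Hidden-layer activations Phi = sigma(X @ W + b) for fixed hidden weights.
    ReLU is used here purely for illustration."""
    return np.maximum(X @ W + b, 0.0)

def fit_output_weights(Phi, y, lam=1e-3):
    """Regularized least squares for the output layer:
    a = (Phi^T Phi + lam * I)^{-1} Phi^T y  (the normal equations with an
    l2 penalty, a hedged stand-in for the paper's regularization scheme)."""
    m = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)

# Toy example: approximate sin(x) with 8 hidden units. In the paper the
# hidden weights would come from the learned generative model; here we use
# a fixed random draw as a placeholder.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 50)[:, None]
y = np.sin(X).ravel()
W = rng.normal(size=(1, 8))   # placeholder hidden weights
b = rng.normal(size=8)        # placeholder hidden biases
Phi = hidden_features(X, W, b)
a = fit_output_weights(Phi, y)   # output layer solved in closed form
pred = Phi @ a
```

Because the output layer is linear in its weights, this solve is exact and cheap, which is what lets the method score candidate hidden-weight samples without running a full inner optimization.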