Observation Noise and Initialization in Wide Neural Networks

📅 2025-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The Neural Tangent Kernel–Gaussian Process (NTK-GP) view of wide neural networks has two key limitations: it neglects observation noise and offers no flexibility in specifying the prior mean function. Method: The paper proposes two principled improvements: (1) an explicit noise-regularization term in the training objective, shown to be equivalent to incorporating observation noise into the NTK-GP model; and (2) a "shifted network" construction that admits arbitrary differentiable prior mean functions without ensemble averaging or kernel matrix inversion. The approach connects NTK theory, Gaussian process regression, and gradient-based optimization. Contribution/Results: Experiments across a range of noise levels and network architectures validate the theory and show improved predictive accuracy under noisy observations, yielding a more controllable framework for Bayesian inference in wide neural networks in which domain knowledge enters through a customizable prior mean and observation noise is handled in a principled way.
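
For reference, the quantity that such a noise-aware training procedure targets is the standard GP posterior mean with a noise term added to the inverted kernel matrix. The notation below is generic GP regression with the NTK as the kernel, not taken from the paper:

```latex
% Posterior mean of a GP with kernel \Theta (here the NTK), prior mean m,
% training inputs X, noisy targets y, and observation-noise variance \sigma^2,
% evaluated at a test point x_*:
\mu(x_*) = m(x_*) + \Theta(x_*, X)\,\bigl(\Theta(X, X) + \sigma^2 I\bigr)^{-1}\bigl(y - m(X)\bigr)
```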

📝 Abstract
Performing gradient descent in a wide neural network is equivalent to computing the posterior mean of a Gaussian Process with the Neural Tangent Kernel (NTK-GP), for a specific choice of prior mean and with zero observation noise. However, existing formulations of this result have two limitations: i) the resultant NTK-GP assumes no noise in the observed target variables, which can result in suboptimal predictions with noisy data; ii) it is unclear how to extend the equivalence to an arbitrary prior mean, a crucial aspect of formulating a well-specified model. To address the first limitation, we introduce a regularizer into the neural network's training objective, formally showing its correspondence to incorporating observation noise into the NTK-GP model. To address the second, we introduce a shifted network that enables arbitrary prior mean functions. This approach allows us to perform gradient descent on a single neural network, without expensive ensembling or kernel matrix inversion. Our theoretical insights are validated empirically, with experiments exploring different values of observation noise and network architectures.
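
As an illustration, here is a minimal PyTorch sketch of the two ingredients described in the abstract: an L2 penalty pulling the weights back toward their initialization, assumed here to be the form of the noise regularizer (with its coefficient playing the role of the observation-noise variance), and a shifted network that subtracts the frozen function at initialization and adds an arbitrary prior mean. The names (ShiftedNetwork, regularized_loss, sigma2, prior_mean) are illustrative assumptions, not the authors' code.

```python
# Hedged sketch, not the paper's implementation: regularized objective and
# shifted network for a wide MLP, under the assumptions stated above.
import copy
import torch
import torch.nn as nn

def prior_mean(x):
    # Hypothetical prior mean function m(x); any differentiable function works.
    return torch.zeros(x.shape[0], 1)

class ShiftedNetwork(nn.Module):
    """f_shifted(x) = f(x; theta) - f(x; theta_0) + m(x).

    Subtracting a frozen copy of the network at initialization removes the random
    initial function, and adding m(x) installs it as the prior mean (assumed to be
    the 'shifted network' construction described in the abstract)."""
    def __init__(self, base: nn.Module, mean_fn=prior_mean):
        super().__init__()
        self.net = base
        self.net_init = copy.deepcopy(base)  # frozen snapshot at initialization
        for p in self.net_init.parameters():
            p.requires_grad_(False)
        self.mean_fn = mean_fn

    def forward(self, x):
        return self.net(x) - self.net_init(x) + self.mean_fn(x)

def regularized_loss(model: ShiftedNetwork, x, y, sigma2: float):
    """Squared error plus an L2 penalty toward the initial parameters.

    Assumption: the noise regularizer has this kernel-ridge form, so that in the
    infinite-width limit gradient descent recovers the NTK-GP posterior mean with
    observation-noise variance sigma2."""
    mse = 0.5 * ((model(x) - y) ** 2).sum()
    reg = 0.0
    for p, p0 in zip(model.net.parameters(), model.net_init.parameters()):
        reg = reg + ((p - p0) ** 2).sum()
    return mse + 0.5 * sigma2 * reg

# Minimal usage: a wide two-layer network trained with full-batch gradient descent.
base = nn.Sequential(nn.Linear(1, 2048), nn.ReLU(), nn.Linear(2048, 1))
model = ShiftedNetwork(base)
x, y = torch.linspace(-1, 1, 32).unsqueeze(1), torch.randn(32, 1)
opt = torch.optim.SGD(model.net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    regularized_loss(model, x, y, sigma2=0.1).backward()
    opt.step()
```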
Problem

Research questions and friction points this paper is trying to address.

Neural Tangent Kernel
Gaussian Process
Model Flexibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient Descent Modifications
Observation Noise Incorporation
Arbitrary Initial Guesses