Post-Hoc Uncertainty Quantification in Pre-Trained Neural Networks via Activation-Level Gaussian Processes

📅 2025-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing methods for uncertainty quantification in pre-trained neural networks, such as Dropout, Bayesian neural networks (BNNs), and the Laplace approximation, either underfit or become computationally prohibitive at scale. To address this, the paper proposes the Gaussian Process Activation function (GAPA) framework, which shifts Gaussian process modeling from weight space to activation space, avoiding weight-posterior inference while preserving the pre-trained model's mean predictions. GAPA models uncertainty at the level of individual neurons and comes in two variants: GAPA-Free, a training-free version that sets kernel hyperparameters empirically from the training data, and GAPA-Variational, which learns the hyperparameters by gradient descent on the kernels. Experiments show that GAPA-Variational outperforms the Laplace approximation on at least one uncertainty-quantification metric on most benchmarks while remaining efficient.
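To make the activation-space idea concrete, below is a minimal sketch (not the authors' implementation) of a training-free, GAPA-Free-style variance estimate for a single neuron: an RBF kernel over the previous layer's training activations, with the lengthscale set by a median heuristic. The pre-trained network's own activation is kept as the predictive mean, so only a variance is added post hoc. The function names, the kernel choice, and the median-heuristic lengthscale are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, lengthscale, variance):
    """Squared-exponential kernel: variance * exp(-||x - z||^2 / (2 * lengthscale^2))."""
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :] - 2.0 * X @ Z.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_activation_variance(H_train, H_test, lengthscale=None, variance=1.0, noise=1e-3):
    """Posterior variance of a GP placed over activation space (illustrative sketch).

    H_train: (N, D) activations of the previous layer on training data
    H_test:  (M, D) activations of the previous layer on test data
    Returns an (M,) vector of per-input predictive variances for one neuron.
    The predictive mean is deliberately NOT recomputed: the pre-trained
    network's own activation is kept as the mean, which is what makes the
    scheme post-hoc and avoids underfitting the original predictions.
    """
    if lengthscale is None:
        # Training-free "median heuristic" for the lengthscale (a GAPA-Free-style choice).
        d = np.sqrt(np.maximum(
            np.sum((H_train[:, None, :] - H_train[None, :, :])**2, axis=-1), 0.0))
        lengthscale = np.median(d[d > 0])
    K = rbf_kernel(H_train, H_train, lengthscale, variance) + noise * np.eye(len(H_train))
    K_star = rbf_kernel(H_test, H_train, lengthscale, variance)
    # var(f*) = k(x*, x*) - k(x*, X) K^{-1} k(X, x*)
    solve = np.linalg.solve(K, K_star.T)
    return variance - np.sum(K_star * solve.T, axis=1)
```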

📝 Abstract
Uncertainty quantification in neural networks through methods such as Dropout, Bayesian neural networks and Laplace approximations is either prone to underfitting or computationally demanding, rendering these approaches impractical for large-scale datasets. In this work, we address these shortcomings by shifting the focus from uncertainty in the weight space to uncertainty at the activation level, via Gaussian processes. More specifically, we introduce the Gaussian Process Activation function (GAPA) to capture neuron-level uncertainties. Our approach operates in a post-hoc manner, preserving the original mean predictions of the pre-trained neural network and thereby avoiding the underfitting issues commonly encountered in previous methods. We propose two methods. The first, GAPA-Free, employs empirical kernel learning from the training data for the hyperparameters and is highly efficient during training. The second, GAPA-Variational, learns the hyperparameters via gradient descent on the kernels, thus affording greater flexibility. Empirical results demonstrate that GAPA-Variational outperforms the Laplace approximation on most datasets in at least one of the uncertainty quantification metrics.
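The abstract states that GAPA-Variational learns the kernel hyperparameters via gradient descent on the kernels; the exact objective is not given here, so the sketch below uses the standard GP negative marginal log-likelihood as a stand-in. The function names, the log-space parameterisation, and the optimizer settings are assumptions, not the paper's implementation.

```python
import torch

def rbf_kernel(X, Z, log_lengthscale, log_variance):
    """Squared-exponential kernel with hyperparameters in log-space to keep them positive."""
    d2 = torch.cdist(X, Z).pow(2)
    return torch.exp(log_variance) * torch.exp(-0.5 * d2 / torch.exp(2.0 * log_lengthscale))

def fit_kernel_hyperparameters(H, a, steps=200, lr=1e-2):
    """Learn kernel hyperparameters by gradient descent (a GAPA-Variational-style step).

    H: (N, D) activations of the previous layer on training data
    a: (N,)   the target neuron's activations, used as regression targets
    Objective: standard GP negative marginal log-likelihood (a stand-in for
    the paper's variational objective).
    """
    log_ls = torch.zeros((), requires_grad=True)
    log_var = torch.zeros((), requires_grad=True)
    log_noise = torch.tensor(-4.0, requires_grad=True)
    opt = torch.optim.Adam([log_ls, log_var, log_noise], lr=lr)
    N = H.shape[0]
    for _ in range(steps):
        opt.zero_grad()
        K = rbf_kernel(H, H, log_ls, log_var) + torch.exp(log_noise) * torch.eye(N)
        L = torch.linalg.cholesky(K)                      # K = L L^T
        alpha = torch.cholesky_solve(a.unsqueeze(1), L)   # alpha = K^{-1} a
        # NLL up to a constant: 0.5 * a^T K^{-1} a + sum(log diag(L))
        nll = 0.5 * (a.unsqueeze(1) * alpha).sum() + torch.log(torch.diagonal(L)).sum()
        nll.backward()
        opt.step()
    return log_ls.detach(), log_var.detach(), log_noise.detach()
```

Once fitted, these hyperparameters would feed the same kernel-based variance computation as the training-free sketch above, trading GAPA-Free's speed for the extra flexibility the abstract attributes to GAPA-Variational.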
Problem

Research questions and friction points this paper is trying to address.

Addresses underfitting and computational demands in uncertainty quantification.
Shifts focus to activation-level uncertainty using Gaussian processes.
Introduces GAPA for efficient post-hoc uncertainty in pre-trained networks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shifts focus to activation-level uncertainty via Gaussian processes
Introduces Gaussian Process Activation function (GAPA)
Proposes GAPA-Free and GAPA-Variational for efficient uncertainty quantification