Approximating Full Conformal Prediction for Neural Network Regression with Gauss-Newton Influence

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural network regression faces two challenges: poorly calibrated prediction intervals and low statistical efficiency. For example, Laplace's method can be miscalibrated under model misspecification, while split conformal prediction sacrifices efficiency through data splitting. Method: a post-hoc conformal inference procedure that requires no held-out data. The approach combines the Gauss-Newton influence function with a linearized neural network, using efficient local perturbations in parameter space to approximate the effect of full retraining, yielding a scalable approximation to full conformal prediction. Contribution/Results: by eliminating repeated training and data partitioning, the method preserves statistical efficiency while producing tighter, better-calibrated prediction intervals. Empirically, it outperforms split conformal prediction on standard regression benchmarks and on bounding-box localization in object detection, improving both the reliability and the practicality of uncertainty quantification in safety-critical applications.

📝 Abstract
Uncertainty quantification is an important prerequisite for the deployment of deep learning models in safety-critical areas. Yet, this hinges on the uncertainty estimates being useful to the extent the prediction intervals are well-calibrated and sharp. In the absence of inherent uncertainty estimates (e.g. pretrained models predicting only point estimates), popular approaches that operate post-hoc include Laplace's method and split conformal prediction (split-CP). However, Laplace's method can be miscalibrated when the model is misspecified and split-CP requires sample splitting, and thus comes at the expense of statistical efficiency. In this work, we construct prediction intervals for neural network regressors post-hoc without held-out data. This is achieved by approximating the full conformal prediction method (full-CP). Whilst full-CP nominally requires retraining the model for every test point and candidate label, we propose to train just once and locally perturb model parameters using Gauss-Newton influence to approximate the effect of retraining. Coupled with linearization of the network, we express the absolute residual nonconformity score as a piecewise linear function of the candidate label allowing for an efficient procedure that avoids the exhaustive search over the output space. On standard regression benchmarks and bounding box localization, we show the resulting prediction intervals are locally-adaptive and often tighter than those of split-CP.
Problem

Research questions and friction points this paper is trying to address.

Estimating uncertainty for neural network regression without held-out data
Approximating full conformal prediction using Gauss-Newton influence
Improving prediction interval calibration and sharpness post-hoc
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Gauss-Newton influence for parameter perturbation
Approximates full conformal prediction post-hoc
Linearizes network for efficient residual scoring