AI Summary
This paper addresses the challenges of high dimensionality and prohibitive computational cost in modeling weight uncertainty for neural networks. We propose FiD-GP, a Flow-induced Diagonal Gaussian Process framework for low-dimensional uncertainty compression. FiD-GP employs a compact induced weight matrix to perform a single low-rank projection, integrates a normalizing flow prior with spectral regularization to enhance uncertainty expressivity and ensure projection stability, and introduces feature-gradient geometric alignment and spectral alignment to theoretically guarantee out-of-distribution (OOD) detection performance. Experiments across multiple benchmarks demonstrate that FiD-GP reduces Bayesian training cost by 2 to 3 orders of magnitude, compresses model parameters by 51%, and shrinks model size by 75%, while maintaining state-of-the-art predictive accuracy and uncertainty calibration.
Abstract
We present Flow-Induced Diagonal Gaussian Processes (FiD-GP), a compression framework that incorporates a compact inducing weight matrix to project a neural network's weight uncertainty into a lower-dimensional subspace. Critically, FiD-GP relies on normalising-flow priors and spectral regularisation to augment its expressiveness and to align the inducing subspace with feature-gradient geometry through a numerically stable projection objective. Furthermore, we demonstrate how FiD-GP's prediction framework supports a single-pass projection for Out-of-Distribution (OoD) detection. Our analysis shows that FiD-GP improves uncertainty estimation on various tasks compared with SVGP-based baselines, satisfies tight spectral residual bounds with theoretically guaranteed OoD detection, and substantially compresses the network's storage requirements, at the cost of inference computation that grows with the number of inducing weights employed. Specifically, in a comprehensive empirical study spanning regression, image classification, semantic segmentation, and out-of-distribution detection benchmarks, it cuts Bayesian training cost by several orders of magnitude, compresses parameters by roughly 51%, reduces model size by about 75%, and matches state-of-the-art accuracy and uncertainty estimation.
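The two ideas the abstract names, compressing weight uncertainty through a low-rank inducing projection with a diagonal Gaussian over the inducing weights, and scoring OoD inputs by the residual energy outside the inducing subspace, can be sketched numerically. This is a minimal illustration under stated assumptions, not the paper's implementation: the shapes, the orthonormal projection `A`, the diagonal Gaussian parameters, and the residual-based score are all hypothetical choices made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a layer with d weights, compressed to m << d inducing weights.
d, m = 4096, 64

# Fixed low-rank projection with orthonormal rows (assumed, for a stable lift-back).
Q, _ = np.linalg.qr(rng.standard_normal((d, m)))  # Q: (d, m), orthonormal columns
A = Q.T                                            # A: (m, d)

# Diagonal Gaussian over inducing weights: u ~ N(mu, diag(exp(log_sigma)^2)).
mu = rng.standard_normal(m)
log_sigma = -2.0 * np.ones(m)

def sample_weights(n_samples=8):
    """Sample inducing weights in the m-dim subspace, then lift back to d dims."""
    eps = rng.standard_normal((n_samples, m))
    u = mu + np.exp(log_sigma) * eps   # (n_samples, m) inducing-weight samples
    return u @ A                        # (n_samples, d) low-rank weight samples

def residual_score(phi):
    """Residual-style OoD score: energy of a feature vector outside the subspace."""
    proj = A.T @ (A @ phi)             # component of phi inside the inducing subspace
    return float(np.linalg.norm(phi - proj))

W = sample_weights()
print(W.shape)                          # weight samples live in a rank-m subspace

phi_in = A.T @ rng.standard_normal(m)   # lies inside the subspace -> score near 0
phi_out = rng.standard_normal(d)        # generic vector -> positive score
print(residual_score(phi_in), residual_score(phi_out))
```

The key cost implication matches the abstract: the Gaussian has only `2m` variational parameters instead of `d`, and both sampling and the OoD score need one pass through the `m × d` projection.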