On the Interplay of Priors and Overparametrization in Bayesian Neural Network Posteriors

๐Ÿ“… 2026-03-23
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Posterior inference in Bayesian neural networks is often hindered by symmetries, non-identifiability, and semantic ambiguity in the prior. This work systematically investigates the interplay between over-parameterization and the prior, revealing how their synergy reshapes the geometry of the posterior distribution. Through large-scale posterior sampling and geometric analysis, we identify three key phenomena induced by over-parameterization: posterior balancing, weight redistribution along equiprobable manifolds, and enhanced alignment with the prior. These findings demonstrate that appropriate over-parameterization yields a posterior distribution with clearer structure and stronger prior consistency, substantially improving the interpretability and practical utility of posterior inference in Bayesian neural networks.

๐Ÿ“ Abstract
Bayesian neural network (BNN) posteriors are often considered impractical for inference, as symmetries fragment them, non-identifiabilities inflate dimensionality, and weight-space priors are seen as meaningless. In this work, we study how overparametrization and priors together reshape BNN posteriors and derive implications allowing us to better understand their interplay. We show that redundancy introduces three key phenomena that fundamentally reshape the posterior geometry: balancedness, weight reallocation on equal-probability manifolds, and prior conformity. We validate our findings through extensive experiments with posterior sampling budgets that far exceed those of earlier works, and demonstrate how overparametrization induces structured, prior-aligned weight posterior distributions.
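The symmetries and non-identifiabilities mentioned in the abstract can be made concrete with the classic ReLU rescaling symmetry: scaling a hidden unit's incoming weights by α > 0 and its outgoing weights by 1/α leaves the network function unchanged, so entire manifolds of weight settings map to the same predictions. Below is a minimal NumPy sketch of this invariance (the network, weights, and α are illustrative, not from the paper):

```python
import numpy as np

# Two-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
x = rng.standard_normal(3)

relu = lambda z: np.maximum(z, 0.0)
f = lambda A, B: B @ relu(A @ x)

# Rescale one hidden unit: multiply its incoming weights by alpha > 0
# and divide its outgoing weights by alpha. ReLU is positively
# homogeneous, so the network function is unchanged.
alpha = 3.7
W1s, W2s = W1.copy(), W2.copy()
W1s[0, :] *= alpha
W2s[:, 0] /= alpha

assert np.allclose(f(W1, W2), f(W1s, W2s))
```

Because every α > 0 gives a distinct weight vector with identical likelihood, the posterior is constant along these rescaling curves; only the prior breaks the tie, which is one way the prior and overparametrization interact in shaping posterior geometry.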
Problem

Research questions and friction points this paper is trying to address.

Bayesian neural networks
posterior geometry
overparametrization
priors
non-identifiability
Innovation

Methods, ideas, or system contributions that make the work stand out.

overparametrization
Bayesian neural networks
posterior geometry
weight reallocation
prior conformity
๐Ÿ”Ž Similar Papers
No similar papers found.