AI Summary
Posterior inference in Bayesian neural networks (BNNs) is often hindered by symmetries, non-identifiability, and the perceived meaninglessness of weight-space priors. This work systematically investigates the interplay between over-parameterization and the prior, revealing how their synergy reshapes the geometry of the posterior distribution. Through large-scale posterior sampling and geometric analysis, the authors identify three key phenomena induced by over-parameterization: balancedness, weight reallocation along equal-probability manifolds, and prior conformity. These findings demonstrate that appropriate over-parameterization yields a posterior with clearer structure and stronger prior consistency, substantially improving the interpretability and practical utility of posterior inference in BNNs.
Abstract
Bayesian neural network (BNN) posteriors are often considered impractical for inference: symmetries fragment them, non-identifiabilities inflate their dimensionality, and weight-space priors are seen as meaningless. In this work, we study how over-parameterization and priors together reshape BNN posteriors, and we derive implications that clarify their interplay. We show that redundancy introduces three key phenomena that fundamentally reshape the posterior geometry: balancedness, weight reallocation on equal-probability manifolds, and prior conformity. We validate our findings through extensive experiments with posterior sampling budgets that far exceed those of earlier work, and demonstrate how over-parameterization induces structured, prior-aligned weight posterior distributions.