🤖 AI Summary
Existing predictive coding (PC) models rely solely on maximum-a-posteriori (MAP) estimation of hidden states and maximum-likelihood (ML) estimation of parameters, rendering them unable to quantify epistemic uncertainty. Method: We propose Bayesian Predictive Coding (BPC), a PC framework that performs Bayesian inference over network parameters while preserving biologically plausible local connectivity constraints. BPC derives closed-form, Hebbian-style variational posterior updates by unifying free-energy minimization with variational Bayesian inference. Contribution/Results: Experiments demonstrate that BPC converges in fewer epochs under full-batch training, remains competitive in mini-batch settings, and offers uncertainty quantification comparable to existing Bayesian deep learning methods while improving convergence properties—enabling statistically principled uncertainty quantification within a neurobiologically grounded architecture.
📝 Abstract
Predictive coding (PC) is an influential theory of information processing in the brain, providing a biologically plausible alternative to backpropagation. It is motivated in terms of Bayesian inference, as hidden states and parameters are optimised via gradient descent on variational free energy. However, implementations of PC rely on maximum *a posteriori* (MAP) estimates of hidden states and maximum likelihood (ML) estimates of parameters, limiting their ability to quantify epistemic uncertainty. In this work, we investigate a Bayesian extension to PC that estimates a posterior distribution over network parameters. This approach, termed Bayesian Predictive Coding (BPC), preserves the locality of PC and results in closed-form Hebbian weight updates. Compared to PC, our BPC algorithm converges in fewer epochs in the full-batch setting and remains competitive in the mini-batch setting. Additionally, we demonstrate that BPC offers uncertainty quantification comparable to existing methods in Bayesian deep learning, while also improving convergence properties. Together, these results suggest that BPC provides a biologically plausible method for Bayesian learning in the brain, as well as an attractive approach to uncertainty quantification in deep learning.
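To illustrate the kind of closed-form, local weight update the abstract describes, here is a minimal sketch for a single linear-Gaussian predictive-coding layer with a conjugate Gaussian prior over its weights. All names (`beta`, `mu`, `Lambda`) and the specific linear-Gaussian setup are illustrative assumptions, not the paper's actual model; the point is only that a Gaussian posterior over weights can be updated in closed form using just products of pre- and post-synaptic activity (Hebbian-style statistics).

```python
import numpy as np

# Assumed layer model (not the paper's exact formulation):
#   y ≈ W @ x + noise,  noise ~ N(0, beta^{-1} I)
# with an independent Gaussian prior N(mu, Lambda^{-1}) over each row of W.
# The posterior over W is then Gaussian with a closed-form conjugate update.

rng = np.random.default_rng(0)
d_in, d_out, n = 3, 2, 50
beta = 1.0                                  # output precision (assumed known)

mu = np.zeros((d_out, d_in))                # prior mean over weights
Lambda = np.eye(d_in)                       # prior precision (shared by rows)

# Synthetic pre-/post-synaptic activity for this layer
X = rng.normal(size=(n, d_in))              # presynaptic activity
W_true = rng.normal(size=(d_out, d_in))
Y = X @ W_true.T + 0.1 * rng.normal(size=(n, d_out))  # postsynaptic activity

# Closed-form posterior update: only the local statistics X^T X and X^T Y
# (sums of products of pre- and post-synaptic activity) are required.
Lambda_post = Lambda + beta * X.T @ X                       # posterior precision
mu_post = np.linalg.solve(Lambda_post,
                          Lambda @ mu.T + beta * X.T @ Y).T  # posterior mean

print(np.abs(mu_post - W_true).mean())      # posterior mean approaches W_true
```

Because the update depends only on activity statistics local to the layer, it respects the locality constraint that the abstract highlights, while also yielding a full posterior (mean and precision) rather than a point estimate.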