Bayesian Predictive Coding

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing predictive coding (PC) models rely solely on maximum-a-posteriori (MAP) estimation of hidden states and maximum-likelihood (ML) estimation of parameters, leaving them unable to quantify epistemic uncertainty. Method: The authors propose Bayesian Predictive Coding (BPC), a PC framework that performs Bayesian inference over network parameters while preserving biologically plausible local connectivity constraints. BPC derives closed-form, Hebbian-style variational posterior updates by combining free-energy minimization with variational Bayesian inference. Contribution/Results: Experiments show that BPC converges faster than PC under full-batch training, remains competitive with PC in mini-batch settings, and matches existing Bayesian deep learning methods on uncertainty quantification while improving convergence — yielding statistically principled uncertainty estimates within a neurobiologically grounded architecture.

📝 Abstract
Predictive coding (PC) is an influential theory of information processing in the brain, providing a biologically plausible alternative to backpropagation. It is motivated in terms of Bayesian inference, as hidden states and parameters are optimised via gradient descent on variational free energy. However, implementations of PC rely on maximum a posteriori (MAP) estimates of hidden states and maximum likelihood (ML) estimates of parameters, limiting their ability to quantify epistemic uncertainty. In this work, we investigate a Bayesian extension to PC that estimates a posterior distribution over network parameters. This approach, termed Bayesian Predictive Coding (BPC), preserves the locality of PC and results in closed-form Hebbian weight updates. Compared to PC, our BPC algorithm converges in fewer epochs in the full-batch setting and remains competitive in the mini-batch setting. Additionally, we demonstrate that BPC offers uncertainty quantification comparable to existing methods in Bayesian deep learning, while also improving convergence properties. Together, these results suggest that BPC provides a biologically plausible method for Bayesian learning in the brain, as well as an attractive approach to uncertainty quantification in deep learning.
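The abstract's key claim — that maintaining a posterior over parameters still yields closed-form, Hebbian-style weight updates — can be illustrated with a toy linear-Gaussian layer. The sketch below is an assumption-laden simplification, not the paper's implementation: it uses conjugate Bayesian linear regression with known noise variance, so the posterior precision and precision-weighted mean are updated by outer products of locally available pre- and post-synaptic signals (the Hebbian flavour referred to above).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model for one layer: y ≈ W x + Gaussian noise.
# All names (W_true, P, Eta, sigma2) are illustrative choices.
d_in, d_out, sigma2 = 3, 2, 0.1
W_true = rng.normal(size=(d_out, d_in))

# Gaussian posterior over W in natural parameters:
# shared precision P over inputs, precision-weighted mean Eta.
P = np.eye(d_in)                # prior precision (isotropic)
Eta = np.zeros((d_out, d_in))   # prior precision-weighted mean

for _ in range(500):
    x = rng.normal(size=d_in)
    y = W_true @ x + np.sqrt(sigma2) * rng.normal(size=d_out)
    # Closed-form conjugate update: both terms are outer products of
    # local signals, so no backpropagated gradients are needed.
    P += np.outer(x, x) / sigma2
    Eta += np.outer(y, x) / sigma2

W_mean = Eta @ np.linalg.inv(P)   # posterior mean of the weights
W_cov = np.linalg.inv(P)          # posterior covariance (per output row)
```

Because the posterior is maintained rather than a point estimate, `W_cov` quantifies the remaining parameter uncertainty, which shrinks as the precision accumulates data.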
Problem

Research questions and friction points this paper is trying to address.

Extends predictive coding to estimate parameter posterior distributions
Addresses limitations in quantifying epistemic uncertainty
Provides biologically plausible Bayesian learning for neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian extension to Predictive Coding
Estimates posterior distribution over parameters
Closed-form Hebbian weight updates
Alexander Tschantz — VERSES AI, University of Sussex (Machine Learning, Neuroscience, Active Inference, Bayesian Statistics)
Magnus Koudahl — VERSES AI Research Lab, Los Angeles, CA, USA
Hampus Linander — VERSES AI Research Lab, Los Angeles, CA, USA
Lancelot Da Costa — VERSES & ELLIS Institute Tübingen (Artificial Intelligence, Cognitive Science, Mathematics, Physics)
Conor Heins — VERSES AI Research Lab (Machine Learning, Probabilistic Machine Learning, Collective Behavior, Active Inference)
Jeff Beck — Duke University (Computational Neuroscience)
Christopher Buckley — VERSES AI Research Lab, Los Angeles, CA, USA; School of Engineering and Informatics, University of Sussex, Brighton, UK