μPC: Scaling Predictive Coding to 100+ Layer Networks

📅 2025-05-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Predictive coding networks (PCNs) suffer from severe training instability beyond roughly 100 layers, which has prevented them from competing with backpropagation (BP). Method: The authors propose μPC, a parameterisation grounded in Depth-μP theory that enables the first stable training of 128-layer residual PCNs. Training relies only on local neural activity, with no global backpropagation. An extensive analysis of deep PCNs identifies and mitigates several instabilities, including exploding/vanishing gradients and diverging activations. Contribution/Results: μPC enables zero-shot transfer of both weight and activity learning rates across network widths and depths. On simple image-classification tasks it matches BP baselines in accuracy with minimal hyperparameter tuning and stable training. The implementation is open-sourced and could be extended to convolutional and transformer architectures.
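To make the "local neural activity, no global backpropagation" claim concrete, here is a minimal sketch of predictive coding on a two-layer linear network. It is illustrative only: the quadratic energy, step sizes, and layer sizes are generic PC conventions, not the paper's exact μPC formulation (the authors' implementation is the jpc JAX library).

```python
import numpy as np

# Toy predictive coding (PC) network: input x0 -> hidden x1 -> output y.
# All quantities below are hypothetical choices for illustration.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)) * 0.1   # layer-1 weights
W2 = rng.normal(size=(2, 4)) * 0.1   # layer-2 weights

x0 = rng.normal(size=3)              # clamped input
y = rng.normal(size=2)               # clamped target

# Free hidden activity, initialised with a forward pass
x1 = W1 @ x0

def energy(x1):
    e1 = x1 - W1 @ x0                # layer-1 prediction error
    e2 = y - W2 @ x1                 # layer-2 prediction error
    return 0.5 * (e1 @ e1 + e2 @ e2)

E_init = energy(x1)

# Inference phase: relax the activities by gradient descent on the energy.
# The update for x1 uses only its own error and the error fed back from
# the layer above -- no global backward pass.
for _ in range(200):
    e1 = x1 - W1 @ x0
    e2 = y - W2 @ x1
    x1 = x1 - 0.1 * (e1 - W2.T @ e2)

E_relaxed = energy(x1)

# Learning phase: each weight update is Hebbian-like, using only the
# local (post-synaptic) error and the (pre-synaptic) activity.
e1 = x1 - W1 @ x0
e2 = y - W2 @ x1
W1 = W1 + 0.01 * np.outer(e1, x0)
W2 = W2 + 0.01 * np.outer(e2, x1)
```

The instabilities the paper analyses arise when stacks of such layers grow deep: the relaxed errors and their feedback terms can explode or vanish with depth, which is what the μPC parameterisation targets.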

📝 Abstract
The biological implausibility of backpropagation (BP) has motivated many alternative, brain-inspired algorithms that attempt to rely only on local information, such as predictive coding (PC) and equilibrium propagation. However, these algorithms have notoriously struggled to train very deep networks, preventing them from competing with BP in large-scale settings. Indeed, scaling PC networks (PCNs) has recently been posed as a challenge for the community (Pinchetti et al., 2024). Here, we show that 100+ layer PCNs can be trained reliably using a Depth-μP parameterisation (Yang et al., 2023; Bordelon et al., 2023) which we call "μPC". Through an extensive analysis of the scaling behaviour of PCNs, we reveal several pathologies that make standard PCNs difficult to train at large depths. We then show that, despite addressing only some of these instabilities, μPC allows stable training of very deep (up to 128-layer) residual networks on simple classification tasks with competitive performance and little tuning compared to current benchmarks. Moreover, μPC enables zero-shot transfer of both weight and activity learning rates across widths and depths. Our results have implications for other local algorithms and could be extended to convolutional and transformer architectures. Code for μPC is made available as part of a JAX library for PCNs at https://github.com/thebuckleylab/jpc (Innocenti et al., 2024).
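The core idea behind Depth-μP is to rescale residual branches so that activations stay O(1) as depth grows. The toy model below (linear branches, random weights) shows why: an unscaled residual stream blows up with depth, while a 1/√L branch multiplier keeps it stable. This is a sketch of the general Depth-μP scaling idea, not the paper's full μPC recipe, which also prescribes learning-rate scalings.

```python
import numpy as np

def forward(L, scale_branches, width=64, seed=0):
    """Push a random vector through L residual blocks h <- h + alpha * W h.

    Hypothetical toy model: linear branches with Gaussian weights of
    variance 1/width. With scale_branches=True the branch multiplier is
    alpha = 1/sqrt(L), as in Depth-muP-style parameterisations.
    """
    rng = np.random.default_rng(seed)
    h = rng.normal(size=width) / np.sqrt(width)   # unit-scale input
    alpha = 1.0 / np.sqrt(L) if scale_branches else 1.0
    for _ in range(L):
        W = rng.normal(size=(width, width)) / np.sqrt(width)
        h = h + alpha * (W @ h)                   # residual block
    return np.linalg.norm(h)

# Unscaled streams grow roughly like 2^(L/2); scaled ones stay near e^(1/2).
for L in (8, 32, 128):
    print(L, forward(L, scale_branches=False), forward(L, scale_branches=True))
```

At L = 128 the unscaled norm is astronomically large while the scaled norm stays within a small constant of 1, which is the kind of depth-independent behaviour that makes 100+ layer training feasible.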
Problem

Research questions and friction points this paper is trying to address.

Scaling predictive coding to deep networks (100+ layers)
Addressing training instability in deep predictive coding networks
Enabling competitive performance without backpropagation in large networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Depth-μP parameterisation for deep networks
Enables stable training of 100+ layer PCNs
Allows zero-shot transfer of learning rates
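The zero-shot transfer point can be sketched as a rescaling rule: a learning rate tuned once on a small proxy model is rescaled, not re-tuned, for wider or deeper models. The 1/width rule below is the standard μP prescription for hidden-weight learning rates under Adam-style optimisers; the exact μPC rules (including depth scalings for activity learning rates) are an assumption here and may differ in the paper.

```python
def transfer_lr(base_lr, base_width, width):
    """Rescale a tuned base learning rate to a new width.

    Assumed muP-style rule for hidden weights with Adam-like updates:
    learning rate scales inversely with width. Illustrative only.
    """
    return base_lr * base_width / width

lr_small = 1e-2                        # tuned once at width 64
for w in (64, 256, 1024):
    print(w, transfer_lr(lr_small, 64, w))
```

The practical payoff is that hyperparameter search is done once on a cheap model; the rescaled rates then work across the width/depth sweep without further tuning.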