Predictive Coding-based Deep Neural Network Fine-tuning for Computationally Efficient Domain Adaptation

📅 2025-09-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address performance degradation of deep models under dynamic environmental shifts—such as sensor drift and illumination changes—this paper proposes a hybrid online domain adaptation method integrating backpropagation (BP) with predictive coding. The approach first establishes a robust representation via offline BP pretraining, then enables lightweight, local error-driven parameter updates through online predictive coding, balancing representational capacity and computational efficiency. Its key innovation lies in the first use of differentiable, low-overhead predictive coding as an online fine-tuning mechanism, specifically designed for resource-constrained edge devices and neuromorphic hardware. Experiments on MNIST and CIFAR-10 demonstrate that the method reduces computational cost by approximately 62% compared to pure BP-based online updating, while effectively mitigating accuracy loss and significantly enhancing model robustness and stability under continual distributional shift.

📝 Abstract
As deep neural networks are increasingly deployed in dynamic, real-world environments, relying on a single static model is often insufficient. Changes in input data distributions caused by sensor drift or lighting variations necessitate continual model adaptation. In this paper, we propose a hybrid training methodology that enables efficient on-device domain adaptation by combining the strengths of Backpropagation and Predictive Coding. The method begins with a deep neural network trained offline using Backpropagation to achieve high initial performance. Subsequently, Predictive Coding is employed for online adaptation, allowing the model to recover accuracy lost due to shifts in the input data distribution. This approach leverages the robustness of Backpropagation for initial representation learning and the computational efficiency of Predictive Coding for continual learning, making it particularly well-suited for resource-constrained edge devices or future neuromorphic accelerators. Experimental results on the MNIST and CIFAR-10 datasets demonstrate that this hybrid strategy enables effective adaptation with a reduced computational overhead, offering a promising solution for maintaining model performance in dynamic environments.
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning deep neural networks for efficient domain adaptation
Addressing input data distribution shifts in dynamic environments
Enabling on-device adaptation with reduced computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid training combines Backpropagation and Predictive Coding
Predictive Coding enables efficient online domain adaptation
Method reduces computational overhead for resource-constrained devices
Matteo Cardoni
IDLab, Department of Information and Technology, Ghent University—imec, 9052 Ghent, Belgium
Sam Leroux
Assistant professor, Ghent University - imec
Resource-efficient deep learning · machine learning on the edge · distributed machine learning · TinyML