🤖 AI Summary
To address performance degradation of deep models under dynamic environmental shifts—such as sensor drift and illumination changes—this paper proposes a hybrid online domain adaptation method integrating backpropagation (BP) with predictive coding. The approach first establishes a robust representation via offline BP pretraining, then enables lightweight, local error-driven parameter updates through online predictive coding, balancing representational capacity and computational efficiency. Its key innovation lies in the first use of differentiable, low-overhead predictive coding as an online fine-tuning mechanism, specifically designed for resource-constrained edge devices and neuromorphic hardware. Experiments on MNIST and CIFAR-10 demonstrate that the method reduces computational cost by approximately 62% compared to pure BP-based online updating, while effectively mitigating accuracy loss and significantly enhancing model robustness and stability under continual distributional shift.
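The local-update mechanism described above can be illustrated with a minimal sketch. The following toy example (linear layers, hypothetical shapes and learning rates; not the paper's actual implementation) shows the core predictive-coding idea: clamp the input and target, relax the hidden activity to minimize layer-wise prediction errors, then apply purely local weight updates of the form ΔW ∝ error · presynaptic activity, with no global backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def pc_adapt_step(W1, W2, x_in, y_target, n_infer=20, lr_x=0.1, lr_w=0.01):
    """One online predictive-coding update for a 2-layer linear network.

    Energy: E = 0.5*||x1 - W1 x_in||^2 + 0.5*||y - W2 x1||^2
    Inference relaxes the hidden state x1; learning uses only local errors.
    (Illustrative sketch, not the paper's exact procedure.)
    """
    x1 = W1 @ x_in                    # initialize hidden state feedforward
    for _ in range(n_infer):
        e1 = x1 - W1 @ x_in           # prediction error at hidden layer
        e2 = y_target - W2 @ x1       # prediction error at clamped output
        x1 += lr_x * (-e1 + W2.T @ e2)  # gradient descent on the energy
    # local weight updates: each layer needs only its own error signal
    e1 = x1 - W1 @ x_in
    e2 = y_target - W2 @ x1
    W1 = W1 + lr_w * np.outer(e1, x_in)
    W2 = W2 + lr_w * np.outer(e2, x1)
    return W1, W2

# Toy adaptation run: repeated PC steps should shrink the output error.
W1 = rng.normal(scale=0.1, size=(8, 4))
W2 = rng.normal(scale=0.1, size=(3, 8))
x_in = rng.normal(size=4)
y_target = np.array([1.0, 0.0, 0.0])

err_before = np.linalg.norm(y_target - W2 @ (W1 @ x_in))
for _ in range(50):
    W1, W2 = pc_adapt_step(W1, W2, x_in, y_target)
err_after = np.linalg.norm(y_target - W2 @ (W1 @ x_in))
```

Because each layer's update depends only on its own error and presynaptic activity, the per-step cost and memory footprint stay low, which is the property the paper exploits for edge and neuromorphic deployment.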
📝 Abstract
As deep neural networks are increasingly deployed in dynamic, real-world environments, relying on a single static model is often insufficient. Changes in input data distributions caused by sensor drift or lighting variations necessitate continual model adaptation. In this paper, we propose a hybrid training methodology that enables efficient on-device domain adaptation by combining the strengths of Backpropagation and Predictive Coding. The method begins with a deep neural network trained offline using Backpropagation to achieve high initial performance. Subsequently, Predictive Coding is employed for online adaptation, allowing the model to recover accuracy lost due to shifts in the input data distribution. This approach leverages the robustness of Backpropagation for initial representation learning and the computational efficiency of Predictive Coding for continual learning, making it particularly well suited for resource-constrained edge devices and future neuromorphic accelerators. Experimental results on the MNIST and CIFAR-10 datasets demonstrate that this hybrid strategy enables effective adaptation with reduced computational overhead, offering a promising solution for maintaining model performance in dynamic environments.