🤖 AI Summary
To address the high computational cost and poor edge-deployment feasibility of gradient-based feature fine-tuning in unsupervised domain adaptation (UDA), this paper proposes a brain-inspired, gradient-free, distributed memory learning mechanism. Instead of backpropagation, the method employs sparsely connected spiking neurons to propagate cross-domain features, integrates distributed memory encoding with confidence-weighted associative decision-making, and updates memory modules online for rapid adaptation. Evaluated on four real-world cross-domain benchmarks, it achieves an average accuracy gain of over 10% compared to gradient-based MLPs while reducing model optimization time by 87%, significantly enhancing real-time performance and edge deployability. The core contribution is the first integration of a gradient-free, spike-driven, memory-augmented distributed learning paradigm into lightweight domain adaptation.
📝 Abstract
Compared with gradient-based artificial neural networks, biological neural networks usually show a more powerful generalization ability, quickly adapting to unknown environments without any gradient back-propagation procedure. Inspired by the distributed memory mechanism of the human brain, we propose a novel gradient-free Distributed Memorization Learning mechanism, namely DML, to support quick domain adaptation of transferred models. In particular, DML adopts randomly connected neurons to memorize the association of input signals, which are propagated as impulses, and makes the final decision by associating the distributed memories based on their confidence. More importantly, DML is able to perform reinforced memorization based on unlabeled data to quickly adapt to a new domain without heavy fine-tuning of deep features, which makes it well suited for deployment on edge devices. Experiments on four cross-domain real-world datasets show that DML achieves superior real-time domain adaptation performance compared with a traditional gradient-based MLP, with an accuracy improvement of more than 10% while reducing the optimization time cost by 87%.
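The abstract describes three ingredients: randomly connected neurons that fire as impulses, distributed memories that record label associations, and a confidence-weighted vote for the final decision, with reinforced memorization on unlabeled data for adaptation. The sketch below is a minimal, hypothetical illustration of that flavor of mechanism, assuming a fixed random projection with top-k "spiking", per-neuron class-count memories, and pseudo-label-based reinforcement; the class name, parameters, and thresholds are our own illustrative choices, not the paper's actual algorithm.

```python
import numpy as np

class DMLSketch:
    """Hypothetical sketch of a gradient-free distributed memorization
    learner in the spirit of the abstract (details are assumptions)."""

    def __init__(self, in_dim, n_classes, n_units=256, sparsity=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Randomly connected neurons: a fixed random projection, never trained.
        self.W = rng.standard_normal((n_units, in_dim))
        self.k = max(1, int(sparsity * n_units))  # neurons firing per input
        # Distributed memory: per-neuron class-association counts.
        self.memory = np.zeros((n_units, n_classes))

    def _spikes(self, x):
        # Propagate the input as impulses: only the top-k activated
        # neurons fire (a simple sparse spiking stand-in).
        a = self.W @ np.asarray(x, dtype=float)
        return np.argpartition(a, -self.k)[-self.k:]

    def memorize(self, x, y):
        # Gradient-free update: each firing neuron memorizes the label.
        self.memory[self._spikes(x), y] += 1.0

    def predict(self, x, return_conf=False):
        # Associate the distributed memories, weighting each neuron's
        # vote by its confidence (how peaked its class counts are).
        m = self.memory[self._spikes(x)]
        totals = np.maximum(m.sum(axis=1, keepdims=True), 1.0)
        probs = m / totals
        conf = probs.max(axis=1, keepdims=True)
        votes = (conf * probs).sum(axis=0)
        pred = int(votes.argmax())
        if return_conf:
            denom = votes.sum()
            return pred, (float(votes[pred] / denom) if denom > 0 else 0.0)
        return pred

    def adapt(self, x):
        # Reinforced memorization on unlabeled data: re-memorize a
        # confident prediction as a pseudo-label (threshold is arbitrary).
        pred, conf = self.predict(x, return_conf=True)
        if conf > 0.5:
            self.memorize(x, pred)
        return pred
```

Because learning is pure counting, both `memorize` and `adapt` are single forward passes with no backward pass, which is the property the abstract credits for the low optimization cost on edge devices.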