🤖 AI Summary
This work addresses the poor generalization of universal models for electroencephalogram (EEG) signal decoding, which stems from the non-stationarity of EEG across sessions and individuals. To enable personalized, adaptive decoding on resource-constrained platforms, the authors propose deploying spiking neural networks (SNNs) on ferroelectric memristive synapse hardware. They introduce two strategies: device-aware training, and software pre-training followed by on-chip fine-tuning of only the output layer. Both leverage a threshold-triggered discrete weight update mechanism that accumulates gradients digitally and converts them into programming events only when a threshold is crossed, thereby emulating the device's nonlinear dynamics while reducing write overhead. Experiments demonstrate that both approaches achieve classification performance on par with state-of-the-art software-based SNNs, and that individualized fine-tuning significantly improves motor imagery recognition accuracy, confirming the feasibility of low-overhead adaptive neural signal processing on ferroelectric hardware.
📝 Abstract
Electroencephalography (EEG)-based brain-computer interfaces (BCIs) are strongly affected by non-stationary neural signals that vary across sessions and individuals, limiting the generalization of subject-agnostic models and motivating adaptive, personalized learning on resource-constrained platforms. Programmable memristive hardware offers a promising substrate for such post-deployment adaptation; however, practical realization is challenged by limited weight resolution, device variability, nonlinear programming dynamics, and finite device endurance. In this work, we show that spiking neural networks (SNNs) can be deployed on ferroelectric memristive synaptic devices for adaptive EEG-based motor imagery decoding under realistic device constraints. We fabricate, characterize, and model ferroelectric synapses. We evaluate a convolutional-recurrent SNN architecture under two complementary deployment strategies: (i) device-aware training using a ferroelectric synapse model, and (ii) transfer of software-trained weights followed by low-overhead on-device re-tuning. To enable efficient adaptation, we introduce a device-aware weight-update strategy in which gradient-based updates are accumulated digitally and converted into discrete programming events only when a threshold is exceeded, emulating nonlinear, state-dependent programming dynamics while reducing programming frequency. Both deployment strategies achieve classification performance comparable to that of state-of-the-art software-based SNNs. Furthermore, subject-specific transfer learning, achieved by retraining only the final network layers, further improves classification accuracy. These results demonstrate that programmable ferroelectric hardware can support robust, low-overhead adaptation in spiking neural networks, opening a practical path toward personalized neuromorphic processing of neural signals.
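To make the threshold-triggered update concrete, the following is a minimal sketch of the idea described in the abstract: gradient updates are accumulated in a digital register and converted into discrete, state-dependent programming pulses only when a threshold is crossed. All parameter names, the threshold value, and the saturating (nonlinear) per-pulse conductance model are illustrative assumptions, not the paper's actual device model.

```python
import numpy as np

class ThresholdSynapseArray:
    """Hypothetical sketch: digital gradient accumulation with
    threshold-triggered programming pulses on a memristive array."""

    def __init__(self, n, theta=0.05, g_min=0.0, g_max=1.0, alpha=0.1):
        self.g = np.full(n, 0.5)   # conductance-like weight states
        self.acc = np.zeros(n)     # digital gradient accumulators
        self.theta = theta         # programming threshold (assumed)
        self.g_min, self.g_max = g_min, g_max
        self.alpha = alpha         # per-pulse step scale (assumed)
        self.pulses = 0            # number of device writes issued

    def apply_gradient(self, grad, lr=0.01):
        # Accumulate the update digitally; no device write yet.
        self.acc += -lr * grad
        fire = np.abs(self.acc) >= self.theta
        sign = np.sign(self.acc[fire])
        # Illustrative nonlinear, state-dependent step: potentiation
        # saturates near g_max, depression near g_min.
        room = np.where(sign > 0,
                        self.g_max - self.g[fire],
                        self.g[fire] - self.g_min)
        self.g[fire] += sign * self.alpha * room
        # Subtract the programmed amount, retaining residual gradient.
        self.acc[fire] -= sign * self.theta
        self.pulses += int(fire.sum())
```

With `theta=0.05` and `lr=0.01`, five consecutive unit-gradient updates to a weight trigger a single programming pulse, while sub-threshold accumulations issue no writes at all; this is the mechanism by which programming frequency (and thus endurance cost) is reduced relative to writing on every update.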