🤖 AI Summary
This study addresses the bottleneck of patient-specific training in closed-loop neuromodulation by proposing a symptom-decoding paradigm that requires no individual fine-tuning. Methodologically, we design a pre-trained Transformer architecture tailored to the characteristics of neurophysiological signals: a masked-autoencoding loss function that corrects the frequency bias induced by the 1/f power law, and a general-purpose foundation-model backbone with a context window long enough to model 30-minute temporal sequences. Pretraining and cross-subject transfer are performed on chronically recorded deep brain stimulation (DBS) data. In leave-one-subject-out cross-validation, our model achieves the first zero-shot decoding of Parkinson's disease motor symptoms, without any subject-specific labeled data, and significantly outperforms baseline methods in cross-subject generalization. This work establishes a scalable foundation-model framework for personalized, deployable closed-loop neuromodulation systems.
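To make the 1/f correction concrete: because neural field potentials follow an approximate 1/f^α power law, a plain mean-squared reconstruction error is dominated by slow, high-amplitude components, and a masked autoencoder then learns little about faster rhythms. One way to counteract this is to compare prediction and target in the frequency domain and up-weight each bin by f^α. The sketch below is a minimal PyTorch illustration under that assumption; the function name `spectral_mae_loss` and the exact weighting scheme are ours, not the paper's actual implementation.

```python
import torch

def spectral_mae_loss(pred, target, mask, alpha=1.0):
    """Masked-autoencoder reconstruction loss with a 1/f correction (sketch).

    pred, target: (batch, time) reconstructed and original signals
    mask:         (batch, time) 1.0 where samples were masked, else 0.0
    alpha:        assumed exponent of the ~1/f^alpha power law
    """
    # Restrict the loss to masked samples, as in standard masked autoencoding.
    pred = pred * mask
    target = target * mask

    # Magnitude spectra via the real FFT along the time axis.
    pred_f = torch.fft.rfft(pred, dim=-1)
    target_f = torch.fft.rfft(target, dim=-1)

    # Per-bin weights ~ f^alpha counteract the ~1/f^alpha power law,
    # so high-frequency reconstruction errors are not drowned out.
    freqs = torch.fft.rfftfreq(pred.shape[-1], device=pred.device)
    weights = freqs.pow(alpha)
    weights[0] = weights[1]  # keep a nonzero weight for the DC bin

    err = (pred_f - target_f).abs().pow(2)  # squared spectral error per bin
    return (weights * err).mean()
```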
📝 Abstract
Neural decoding of pathological and physiological states can enable patient-individualized closed-loop neuromodulation therapy. Recent advances in large-scale pre-trained foundation models offer the potential for generalized state estimation without patient-individual training. Here we present a foundation model trained on chronic longitudinal deep brain stimulation (DBS) recordings spanning over 24 days. To capture symptom fluctuations on long time scales, the model uses an extended context window of 30 minutes. We present an optimized pre-training loss function for neural electrophysiological data that corrects the frequency bias that the 1/f power law induces in common masked-autoencoder losses. In a downstream task, we demonstrate decoding of Parkinson's disease symptoms with leave-one-subject-out cross-validation, without patient-individual training.
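For readers unfamiliar with the evaluation protocol: leave-one-subject-out cross-validation means the downstream decoder is fit only on the other subjects' data and applied to the held-out subject with no fine-tuning, which is what "without patient-individual training" refers to. Below is a minimal sketch using scikit-learn, assuming features have already been extracted per subject by the frozen foundation model; the input dictionaries and the logistic-regression decoder are illustrative placeholders, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

def leave_one_subject_out(embeddings, labels):
    """Zero-shot symptom decoding: no data from the test subject is used
    for training, mirroring "without patient-individual training".

    embeddings: dict subject_id -> (n_windows, dim) features from the
                frozen pre-trained foundation model (hypothetical input)
    labels:     dict subject_id -> (n_windows,) symptom-state labels
    """
    scores = {}
    for held_out in embeddings:
        # Fit the decoder on all *other* subjects' windows ...
        X_train = np.concatenate([x for s, x in embeddings.items() if s != held_out])
        y_train = np.concatenate([y for s, y in labels.items() if s != held_out])
        decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        # ... and evaluate on the held-out subject with no fine-tuning.
        y_pred = decoder.predict(embeddings[held_out])
        scores[held_out] = balanced_accuracy_score(labels[held_out], y_pred)
    return scores
```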