Pre-trained Transformer models using chronic invasive electrophysiology for symptom decoding without patient-individual training

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the bottleneck of patient-specific training in closed-loop neuromodulation by proposing a symptom-decoding paradigm that requires no individual fine-tuning. Methodologically, the authors design a pre-trained Transformer architecture tailored to neurophysiological signal characteristics: it incorporates a masked-autoencoding loss function that suppresses the 1/f power-law bias and supports modeling of 30-minute-long temporal sequences as a general-purpose foundation model. Pretraining and cross-subject transfer are performed on chronically recorded deep brain stimulation (DBS) data. In leave-one-subject-out cross-validation, the model achieves zero-shot decoding of Parkinson's disease motor symptoms, without any data from the held-out subject, and generalizes better than baseline methods. This work establishes a scalable foundation-model framework for personalized, deployable closed-loop neuromodulation systems.
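The 30-minute context window described above implies slicing long chronic recordings into many fixed-length patches that serve as Transformer tokens. The sketch below is illustrative only; the sampling rate and patch length are assumptions, not values taken from the paper:

```python
import numpy as np

def segment_into_patches(x, fs=250.0, patch_s=1.0):
    """Split a long 1-D recording into fixed-length patches (tokens).

    With the assumed fs=250 Hz and 1-s patches, a 30-minute context
    corresponds to 1800 tokens of 250 samples each. Trailing samples
    that do not fill a whole patch are dropped.
    """
    patch = int(fs * patch_s)          # samples per patch
    n = (len(x) // patch) * patch      # truncate to a whole number of patches
    return np.asarray(x[:n]).reshape(-1, patch)
```

At these assumed parameters, the token count (1800) makes clear why an extended context window is a non-trivial architectural requirement.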

📝 Abstract
Neural decoding of pathological and physiological states can enable patient-individualized closed-loop neuromodulation therapy. Recent advances in pre-trained large-scale foundation models offer the potential for generalized state estimation without patient-individual training. Here we present a foundation model trained on chronic longitudinal deep brain stimulation recordings spanning over 24 days. To capture long-time-scale symptom fluctuations, we use an extended context window of 30 minutes. We present an optimized pre-training loss function for neural electrophysiological data that corrects the frequency bias of common masked auto-encoder loss functions caused by the 1/f power law. In a downstream task, we demonstrate decoding of Parkinson's disease symptoms with leave-one-subject-out cross-validation and without patient-individual training.
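Because neural power spectra follow a 1/f power law, a plain time-domain reconstruction loss is dominated by low-frequency error. One plausible way to counteract this, sketched below, is to compare reconstructions in the frequency domain and up-weight higher-frequency bins; this is a hedged illustration of the idea, not the authors' actual loss function:

```python
import numpy as np

def flattened_spectral_loss(pred, target, fs=250.0):
    """Spectral reconstruction loss with a whitening-style weight.

    Each frequency bin's squared error is scaled by its frequency
    (clipped below 1 Hz), so 1/f-distributed low-frequency power does
    not dominate the objective. Hypothetical sketch; fs is assumed.
    """
    freqs = np.fft.rfftfreq(np.shape(target)[-1], d=1.0 / fs)
    err = np.abs(np.fft.rfft(pred, axis=-1) - np.fft.rfft(target, axis=-1)) ** 2
    weights = np.maximum(freqs, 1.0)   # counteracts 1/f bias, clipped at 1 Hz
    return float(np.mean(err * weights))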
Problem

Research questions and friction points this paper is trying to address.

Decoding Parkinson's symptoms without patient-specific training
Generalized state estimation using pre-trained foundation models
Correcting frequency bias in neural electrophysiological data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-trained Transformer models for symptom decoding
Chronic invasive electrophysiology with 30-minute context
Optimized loss function correcting frequency bias
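The leave-one-subject-out evaluation behind the "no patient-individual training" claim can be made concrete with a minimal splitter: each subject's data is held out once, and the model is trained only on the remaining subjects. This is a generic sketch, independent of the paper's pipeline:

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) pairs, holding out one subject per fold.

    The held-out subject contributes no samples to training, so any
    decoding on the test fold is achieved without patient-individual
    training. Equivalent to scikit-learn's LeaveOneGroupOut.
    """
    subject_ids = np.asarray(subject_ids)
    for subj in np.unique(subject_ids):
        test_idx = np.where(subject_ids == subj)[0]
        train_idx = np.where(subject_ids != subj)[0]
        yield train_idx, test_idx
```

For example, with samples from three subjects, three folds are produced and each fold's test indices belong to exactly one subject.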
Timon Merk
PostDoc - Baylor College of Medicine
Machine Learning · Adaptive Deep Brain Stimulation · Motor Disorders · Neural Decoding
Saeed Salehi
Herman Brown Endowed Chair in Engineering and Professor, Southern Methodist University
Subsurface energy · Geothermal · Petroleum and Natural Gas · Well Integrity · Drilling
Richard M. Koehler
Movement Disorder and Neuromodulation Unit, Charité University Medicine Berlin, Berlin, Germany
Qiming Cui
PhD Candidate, UCSF
medical image analysis · computer vision · multimodal LLMs
Maria Olaru
Department of Neurological Surgery, University of California San Francisco, San Francisco, United States of America
Amelia Hahn
Loyola University Chicago Stritch School of Medicine
Nicole R. Provenza
Department of Neurosurgery, Baylor College of Medicine, Houston, United States of America
Simon Little
Associate Professor Neurology, University of California San Francisco
Parkinson's disease · reward-based decision making · deep brain stimulation · local field potential
Reza Abbasi-Asl
Associate Professor of Neurology and Bioengineering | UCSF
Machine Learning · Computational Neuroscience · Applied Statistics
Phil A. Starr
Department of Neurological Surgery, University of California San Francisco, San Francisco, United States of America
Wolf-Julian Neumann
Movement Disorder and Neuromodulation Unit, Charité University Medicine Berlin, Berlin, Germany