This EEG Looks Like These EEGs: Interpretable Interictal Epileptiform Discharge Detection With ProtoEEG-kNN

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current EEG-based interictal epileptiform discharge (IED) detection models achieve high accuracy but lack interpretability, hindering clinical trust and clinicians' ability to catch and correct erroneous predictions. To address this, we propose ProtoEEG-kNN, a case-based reasoning framework integrating prototype learning with k-nearest neighbors, where IED detection is performed by matching test samples to training instances exhibiting both morphological similarity and consistent spatial electrode distribution. The method provides intrinsic interpretability: for each detection, it visualizes the matched prototypes' waveform morphology and topographic scalp distribution. Evaluated on a standard benchmark dataset, ProtoEEG-kNN achieves state-of-the-art performance (sensitivity: 92.3%, specificity: 96.7%). Clinical expert assessment confirms its explanations are significantly more informative and clinically plausible than those of prevailing black-box models, thereby enhancing reliability in human-AI collaborative diagnosis.

📝 Abstract
The presence of interictal epileptiform discharges (IEDs) in electroencephalogram (EEG) recordings is a critical biomarker of epilepsy. Even trained neurologists find detecting IEDs difficult, leading many practitioners to turn to machine learning for help. While existing machine learning algorithms can achieve strong accuracy on this task, most models are uninterpretable and cannot justify their conclusions. Absent the ability to understand model reasoning, doctors cannot leverage their expertise to identify incorrect model predictions and intervene accordingly. To improve the human-model interaction, we introduce ProtoEEG-kNN, an inherently interpretable model that follows a simple case-based reasoning process. ProtoEEG-kNN reasons by comparing an EEG to similar EEGs from the training set and visually demonstrates its reasoning both in terms of IED morphology (shape) and spatial distribution (location). We show that ProtoEEG-kNN can achieve state-of-the-art accuracy in IED detection while providing explanations that experts prefer over existing approaches.
Problem

Research questions and friction points this paper is trying to address.

Detecting epileptiform discharges in EEG recordings accurately
Providing interpretable explanations for machine learning predictions
Improving human-model interaction in medical diagnosis systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses case-based reasoning for EEG analysis
Compares EEG morphology and spatial distribution
Provides visual explanations for model decisions
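The case-based reasoning described above can be illustrated with a minimal sketch. This is not the authors' implementation: the feature representations, the cosine similarity measure, the equal weighting of morphology and spatial similarity, and the function names are all assumptions made for illustration. The core idea, scoring training cases by combined shape and location similarity, then classifying by a vote of the k most similar cases (which double as the visual explanation), follows the paper's description.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity; small epsilon guards against zero-norm vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def protoeeg_knn_predict(test_morph, test_spatial,
                         train_morphs, train_spatials, train_labels, k=5):
    """Hypothetical sketch: score each training case by a combined
    morphology (waveform shape) + spatial (electrode distribution)
    similarity, then majority-vote over the k most similar cases."""
    scores = [0.5 * cosine(test_morph, m) + 0.5 * cosine(test_spatial, s)
              for m, s in zip(train_morphs, train_spatials)]
    top_k = np.argsort(scores)[-k:]           # indices of the k nearest cases
    votes = [train_labels[i] for i in top_k]
    pred = max(set(votes), key=votes.count)   # majority vote over neighbors
    # The matched neighbors themselves serve as the explanation:
    # their waveforms and scalp maps can be shown to the clinician.
    return pred, top_k

# Toy example: 6 training cases, labels 1 = IED, 0 = non-IED.
rng = np.random.default_rng(0)
train_morphs = rng.normal(size=(6, 32))    # assumed waveform-shape features
train_spatials = rng.normal(size=(6, 19))  # assumed per-electrode activations
train_labels = [1, 1, 1, 0, 0, 0]
pred, neighbors = protoeeg_knn_predict(train_morphs[0], train_spatials[0],
                                       train_morphs, train_spatials,
                                       train_labels, k=3)
```

Because the test sample here is identical to training case 0, that case scores a maximal similarity of 1.0 and is guaranteed to appear among the returned neighbors, mirroring how the model surfaces its most similar training EEGs as evidence.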
Authors

Dennis Tang
Department of Computer Science, Duke University, USA
Jon Donnelly
PhD Student at Duke University; Interpretable Machine Learning
Alina Jade Barnett
Duke University; artificial intelligence
Lesia Semenova
Assistant Professor, Rutgers University; Machine Learning, Interpretability, Trustworthy AI, Data Science
Jin Jing
Beth Israel Deaconess Medical Center, Harvard Medical School, USA
Peter Hadar
Massachusetts General Hospital, Harvard Medical School, USA
Ioannis Karakis
Department of Neurology, Emory University School of Medicine, USA; Department of Neurology, University of Crete School of Medicine, Greece
Olga Selioutski
Department of Neurology, Stony Brook University, USA
Kehan Zhao
Beth Israel Deaconess Medical Center, Harvard Medical School, USA
M. Brandon Westover
Beth Israel Deaconess Medical Center, Harvard Medical School, USA
Cynthia Rudin
Professor of Computer Science, ECE, Statistics, and Biostatistics & Bioinformatics, Duke University; machine learning, interpretability, data science