🤖 AI Summary
Existing public benchmarks for non-invasive brain-computer interfaces (BCIs) focus primarily on foundational tasks such as speech detection and lack intermediate tasks that bridge toward real-world applications like brain-to-text translation.
Method: This work introduces keyword spotting (KWS) as the first non-invasive neural decoding task bridging practical utility and privacy preservation. Leveraging the 52-hour within-subject LibriBrain dataset, we establish a standardized train/val/test split protocol and propose a joint evaluation metric—AUPRC coupled with false alarms per hour (FA/h) at fixed recall—specifically designed for extreme class imbalance. We employ 1D Conv/ResNet architectures, focal loss, and top-k pooling; we also release a word-level data loader and Colab tutorial.
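The focal-loss and top-k-pooling components mentioned above can be sketched as follows. This is an illustrative NumPy sketch under our own assumptions, not the released implementation; the function names and default hyperparameters are ours:

```python
import numpy as np

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy examples so the rare
    keyword-present class dominates the gradient under imbalance."""
    p = 1.0 / (1.0 + np.exp(-logits))           # sigmoid probability
    p_t = np.where(targets == 1, p, 1.0 - p)    # prob. of the true class
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + 1e-12)))

def topk_pool(window_scores, k=5):
    """Aggregate per-window keyword scores into one clip-level score
    by averaging the k highest-scoring windows."""
    top = np.sort(np.asarray(window_scores, dtype=float))[-k:]
    return float(top.mean())
```

The intuition: focal loss keeps the many easy "keyword absent" windows from swamping training, while top-k pooling lets a clip be flagged by its few most confident windows rather than a diluted average.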
Contribution/Results: Our reference model achieves approximately 13× the permutation-baseline AUPRC on held-out sessions, validating feasibility. We systematically characterize how word frequency and duration affect detectability and find a log-linear scaling law for within-subject performance.
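A log-linear scaling law of this kind can be checked with an ordinary least-squares fit against log training hours. The data points below are invented purely for illustration (only the 52-hour total and the rough AUPRC scale come from the text):

```python
import numpy as np

# Hypothetical (AUPRC, training-hours) pairs -- not the paper's numbers.
hours = np.array([2.0, 6.5, 13.0, 26.0, 52.0])
auprc_vals = np.array([0.04, 0.07, 0.09, 0.11, 0.13])

# Log-linear model: AUPRC ~ slope * log(hours) + intercept.
slope, intercept = np.polyfit(np.log(hours), auprc_vals, deg=1)

# A positive slope on the log scale is the scaling-law signature;
# extrapolation (e.g. to a hypothetical 104 h) should be read cautiously.
pred_104h = slope * np.log(104.0) + intercept
```

Such a fit is what makes within-subject performance "predictable": each doubling of data buys a roughly constant AUPRC increment.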
📝 Abstract
Non-invasive brain-computer interfaces (BCIs) are beginning to benefit from large, public benchmarks. However, current benchmarks target relatively simple, foundational tasks like Speech Detection and Phoneme Classification, while application-ready results on tasks like Brain-to-Text remain elusive. We propose Keyword Spotting (KWS) as a practically applicable, privacy-aware intermediate task. Using the deep 52-hour, within-subject LibriBrain corpus, we provide standardized train/validation/test splits for reproducible benchmarking and adopt an evaluation protocol tailored to extreme class imbalance. Concretely, we use area under the precision-recall curve (AUPRC) as a robust evaluation metric, complemented by false alarms per hour (FA/h) at fixed recall to capture user-facing trade-offs. To simplify deployment and further experimentation within the research community, we are releasing an updated version of the pnpl library with word-level dataloaders and Colab-ready tutorials. As an initial reference model, we present a compact 1-D Conv/ResNet baseline with focal loss and top-k pooling that is trainable on a single consumer-grade GPU. The reference model achieves approximately 13x the permutation-baseline AUPRC on held-out sessions, demonstrating the viability of the task. Exploratory analyses reveal (i) predictable within-subject scaling, with performance improving log-linearly as training hours increase, and (ii) word-level factors, notably frequency and duration, that systematically modulate detectability.
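The evaluation protocol (AUPRC plus FA/h at fixed recall) can be sketched with plain NumPy. This is a minimal illustration of the standard definitions, assuming no score ties; the function names are ours, not the pnpl API:

```python
import numpy as np

def auprc(scores, labels):
    """AUPRC via the average-precision formulation:
    mean precision over the ranks of the true positives."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    ranked = np.asarray(labels)[order]
    tp = np.cumsum(ranked)
    precision = tp / np.arange(1, len(ranked) + 1)
    return float(np.sum(precision * ranked) / ranked.sum())

def false_alarms_per_hour(scores, labels, hours, recall_target=0.5):
    """False positives at the loosest threshold reaching the target
    recall, normalised by recording duration (assumes the target
    recall is achievable)."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    ranked = np.asarray(labels)[order]
    tp = np.cumsum(ranked)
    n_needed = int(np.ceil(recall_target * ranked.sum()))
    cut = int(np.argmax(tp >= n_needed))      # first rank hitting recall
    fp = (cut + 1) - tp[cut]                  # non-keywords above cut
    return float(fp / hours)
```

Under a random (permutation) ranking, expected AUPRC equals the positive-class prevalence, which is why a multiple of the permutation baseline is the natural effect size under extreme imbalance.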