MEGState: Phoneme Decoding from Magnetoencephalography Signals

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of phoneme decoding from non-invasive magnetoencephalography (MEG) signals—characterized by low signal-to-noise ratio (SNR) and high temporal dimensionality. We propose MEGState, the first dedicated phoneme decoding architecture for MEG, which jointly models spatiotemporal dynamics across multichannel MEG data via a hybrid design integrating temporal convolution and state-space models (SSMs). To enhance fine-grained phonemic representation learning, we introduce phoneme-level alignment supervision. Evaluated on the LibriBrain dataset, MEGState consistently outperforms existing baselines in both phoneme classification accuracy and temporal localization precision. Critically, it provides the first systematic demonstration that linguistically grounded phonemic representations are robustly decodable from low-SNR MEG signals. These results establish a new paradigm for non-invasive neural speech decoding and scalable brain–computer interfaces.
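The hybrid design described above pairs a temporal convolution (local smoothing of the noisy MEG time series) with a state-space model recurrence (long-range temporal dynamics). The paper does not publish its implementation here, so the following is a minimal, hypothetical sketch of that two-stage idea on a single toy channel; the kernel, SSM coefficients, and function names are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch (not the MEGState implementation): one MEG channel
# passes through a causal temporal convolution, then a scalar linear
# state-space model (SSM) recurrence:
#   x_{k+1} = a * x_k + b * u_k,   y_k = c * x_k + d * u_k

def temporal_conv(signal, kernel):
    """Causal 1-D convolution: each output depends only on current/past samples."""
    pad = [0.0] * (len(kernel) - 1)
    padded = pad + list(signal)
    return [
        sum(kernel[j] * padded[i + len(kernel) - 1 - j] for j in range(len(kernel)))
        for i in range(len(signal))
    ]

def ssm_scan(u, a=0.9, b=1.0, c=1.0, d=0.0):
    """Scalar linear SSM scan; a < 1 gives an exponentially decaying memory."""
    x, ys = 0.0, []
    for uk in u:
        ys.append(c * x + d * uk)  # readout from the hidden state
        x = a * x + b * uk         # state update
    return ys

meg_channel = [0.0, 1.0, 0.0, 0.0, 0.0]            # toy impulse on one channel
smoothed = temporal_conv(meg_channel, [0.5, 0.5])  # local temporal features
features = ssm_scan(smoothed)                      # long-range dynamics
```

In the real architecture these stages would operate over hundreds of MEG channels with learned kernels and SSM parameters (e.g. an S4/Mamba-style layer); the sketch only shows the data flow.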

📝 Abstract
Decoding linguistically meaningful representations from non-invasive neural recordings remains a central challenge in neural speech decoding. Among available neuroimaging modalities, magnetoencephalography (MEG) provides a safe and repeatable means of mapping speech-related cortical dynamics, yet its low signal-to-noise ratio and high temporal dimensionality continue to hinder robust decoding. In this work, we introduce MEGState, a novel architecture for phoneme decoding from MEG signals that captures fine-grained cortical responses evoked by auditory stimuli. Extensive experiments on the LibriBrain dataset demonstrate that MEGState consistently surpasses baseline models across multiple evaluation metrics. These findings highlight the potential of MEG-based phoneme decoding as a scalable pathway toward non-invasive brain-computer interfaces for speech.
Problem

Research questions and friction points this paper is trying to address.

Decoding phonemes from MEG signals
Overcoming low signal-to-noise ratio in MEG
Enabling non-invasive speech brain-computer interfaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel architecture for phoneme decoding from MEG signals
Captures fine-grained cortical responses to auditory stimuli
Consistently surpasses baseline models on multiple evaluation metrics