Learning the relative composition of EEG signals using pairwise relative shift pretraining

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing EEG self-supervised methods, such as masked reconstruction, focus primarily on local temporal modeling, limiting their capacity to capture long-range dependencies and relative temporal structure. To address this, the paper proposes PARS (PAirwise Relative Shift), a pre-training framework built on *pairwise relative shift prediction* for self-supervised EEG representation learning: given randomly sampled pairs of EEG windows, a Transformer-based encoder is trained to regress their relative temporal shift. This pretext task explicitly models long-range temporal relationships across windows, overcoming the locality constraints of reconstruction-based objectives. Evaluated on multiple brain-signal decoding tasks, PARS-pretrained models consistently outperform state-of-the-art self-supervised approaches, particularly in label-efficient (under 1% annotated data) and cross-subject transfer settings, establishing a new paradigm for representation learning from unlabeled neural signals.

📝 Abstract
Self-supervised learning (SSL) offers a promising approach for learning electroencephalography (EEG) representations from unlabeled data, reducing the need for expensive annotations for clinical applications like sleep staging and seizure detection. While current EEG SSL methods predominantly use masked reconstruction strategies like masked autoencoders (MAE) that capture local temporal patterns, position prediction pretraining remains underexplored despite its potential to learn long-range dependencies in neural signals. We introduce PAirwise Relative Shift or PARS pretraining, a novel pretext task that predicts relative temporal shifts between randomly sampled EEG window pairs. Unlike reconstruction-based methods that focus on local pattern recovery, PARS encourages encoders to capture relative temporal composition and long-range dependencies inherent in neural signals. Through comprehensive evaluation on various EEG decoding tasks, we demonstrate that PARS-pretrained transformers consistently outperform existing pretraining strategies in label-efficient and transfer learning settings, establishing a new paradigm for self-supervised EEG representation learning.
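The pretext task described in the abstract, predicting the relative temporal shift between two randomly sampled EEG windows, can be sketched as a pair-sampling routine. This is a minimal illustration assuming a `(channels, time)` array; all names and the normalization scheme are illustrative, not taken from the paper's code.

```python
import numpy as np

def sample_shift_pair(eeg, win_len, max_shift, rng):
    """Sample two windows from a (channels, time) EEG array and
    return them with their normalized relative shift (pretext label).
    Illustrative sketch; not the paper's implementation."""
    n_time = eeg.shape[1]
    # first window start, leaving room for the largest forward shift
    t1 = int(rng.integers(0, n_time - win_len - max_shift))
    # second window starts within +/- max_shift of the first
    offset = int(rng.integers(-min(t1, max_shift), max_shift + 1))
    t2 = t1 + offset
    w1 = eeg[:, t1:t1 + win_len]
    w2 = eeg[:, t2:t2 + win_len]
    # regression target normalized to [-1, 1]
    label = offset / max_shift
    return w1, w2, label

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 3000))  # e.g. 19 channels, 3000 samples
w1, w2, y = sample_shift_pair(eeg, win_len=200, max_shift=500, rng=rng)
```

An encoder pretrained this way must infer *where* each window sits relative to the other, which is what pushes it toward long-range temporal structure rather than local pattern recovery.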
Problem

Research questions and friction points this paper is trying to address.

How to learn the relative temporal composition of EEG signals without labels
How to capture long-range dependencies in neural signals that local, reconstruction-based pretraining misses
How to improve label efficiency and transfer in EEG decoding via self-supervised representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pairwise relative shift (PARS) pretraining as a pretext task for EEG
Regressing the temporal shift between randomly sampled EEG window pairs
Encoders that capture long-range dependencies and relative temporal composition of neural signals
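The innovations above amount to a regression objective on window pairs. The toy objective below shows the idea with a shared linear encoder and an MSE loss; `enc_W` and `head_w` are hypothetical parameters standing in for the paper's Transformer encoder and prediction head.

```python
import numpy as np

def pars_loss(w1, w2, shift, enc_W, head_w):
    """Toy PARS-style objective: embed both windows with a shared
    encoder, concatenate, and regress the normalized shift.
    A sketch under stated assumptions, not the paper's model."""
    z1 = np.tanh(enc_W @ w1.ravel())          # shared encoder, window 1
    z2 = np.tanh(enc_W @ w2.ravel())          # shared encoder, window 2
    pred = head_w @ np.concatenate([z1, z2])  # scalar shift prediction
    return (pred - shift) ** 2                # squared-error pretext loss

rng = np.random.default_rng(1)
w1 = rng.standard_normal((2, 10))   # two tiny (channels, time) windows
w2 = rng.standard_normal((2, 10))
enc_W = rng.standard_normal((4, 20)) * 0.1   # encoder weights (toy)
head_w = rng.standard_normal(8) * 0.1        # regression head (toy)
loss = pars_loss(w1, w2, shift=0.5, enc_W=enc_W, head_w=head_w)
```

Because the encoder is shared across both windows, minimizing this loss forces its representations to encode temporal position information usable for downstream decoding.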