🤖 AI Summary
Existing EEG self-supervised methods, such as masked reconstruction, focus primarily on local temporal modeling, limiting their capacity to capture long-range dependencies and relative temporal structure. To address this, the authors propose PARS (PAirwise Relative Shift), a pretraining framework built on a *pairwise relative shift prediction* pretext task for self-supervised EEG representation learning: given random pairs of EEG windows, a Transformer-based encoder is trained to regress their relative temporal shift. This objective explicitly models long-range temporal relationships across windows, overcoming the locality constraints inherent in reconstruction-based objectives. Evaluated across multiple EEG decoding tasks, PARS-pretrained models consistently outperform existing self-supervised pretraining strategies, particularly in label-efficient and cross-subject transfer settings, establishing a new paradigm for representation learning from unlabeled neural signals.
📝 Abstract
Self-supervised learning (SSL) offers a promising approach for learning electroencephalography (EEG) representations from unlabeled data, reducing the need for expensive annotations in clinical applications like sleep staging and seizure detection. While current EEG SSL methods predominantly use masked reconstruction strategies such as masked autoencoders (MAE) that capture local temporal patterns, position prediction pretraining remains underexplored despite its potential to learn long-range dependencies in neural signals. We introduce PAirwise Relative Shift (PARS) pretraining, a novel pretext task that predicts relative temporal shifts between randomly sampled EEG window pairs. Unlike reconstruction-based methods that focus on local pattern recovery, PARS encourages encoders to capture the relative temporal composition and long-range dependencies inherent in neural signals. Through comprehensive evaluation on diverse EEG decoding tasks, we demonstrate that PARS-pretrained transformers consistently outperform existing pretraining strategies in label-efficient and transfer learning settings, establishing a new paradigm for self-supervised EEG representation learning.
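The core of the pretext task can be sketched in a few lines. The snippet below is an illustrative sketch, not the authors' implementation: the helper name `sample_pars_pair`, the window length, and the channel count are all assumptions. It shows where the self-supervised label comes from: two windows are drawn at random from one unlabeled recording, and the signed distance between their start times becomes the regression target.

```python
import numpy as np

def sample_pars_pair(recording, win_len, rng):
    """Sample two random windows from one EEG recording and return them
    together with their relative temporal shift (in samples).

    This is a hypothetical helper illustrating the PARS pretext task,
    not the authors' code.
    """
    n_samples = recording.shape[-1]
    s1 = int(rng.integers(0, n_samples - win_len))
    s2 = int(rng.integers(0, n_samples - win_len))
    w1 = recording[:, s1:s1 + win_len]
    w2 = recording[:, s2:s2 + win_len]
    # The signed shift is the free self-supervised label: no annotation needed.
    return w1, w2, s2 - s1

rng = np.random.default_rng(0)
# Stand-in for an unlabeled recording: 8 channels x 3000 time samples.
recording = rng.standard_normal((8, 3000))

w1, w2, shift = sample_pars_pair(recording, win_len=250, rng=rng)
# An encoder f and a small head g would then be trained to minimize the
# regression error between g(f(w1), f(w2)) and `shift`, so that the learned
# representations must encode where each window sits in time.
```

Because the target depends on the temporal relation between two windows rather than on reconstructing samples inside one window, gradients flow from arbitrarily long-range offsets, which is the property the abstract contrasts with masked reconstruction.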