🤖 AI Summary
This work tackles the poor generalization of electroencephalography (EEG) classifiers, caused by low signal-to-noise ratios and high inter-subject variability, by proposing a unified framework that integrates hyperbolic-space modeling with low-rank adaptation. The method introduces Lorentzian attention (applied for the first time to cross-subject EEG modeling) together with an InceptionTime encoder for extracting shared features, and employs a Lorentz low-rank adapter to capture individual differences efficiently. Through a two-stage strategy of cross-subject pretraining followed by personalized fine-tuning, the model achieves state-of-the-art performance across three benchmark EEG datasets, demonstrating both strong generalization and effective subject-specific adaptation.
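The summary does not detail how Lorentzian attention is computed. One common recipe for attention on the hyperboloid (Lorentz) model, sketched here purely as an illustration with hypothetical names, is to score query–key pairs by their negative geodesic distance, so that nearby points on the manifold attend to each other most strongly:

```python
import numpy as np

def lorentz_inner(x, y):
    # Lorentzian inner product: -x0*y0 + <x_1:, y_1:>
    return -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(-1)

def lift(s):
    # lift Euclidean coordinates onto the hyperboloid <x, x>_L = -1
    t = np.sqrt(1.0 + (s ** 2).sum(-1, keepdims=True))
    return np.concatenate([t, s], axis=-1)

def lorentz_attention(q, k, v, tau=1.0):
    # scores are negative geodesic distances d(q, k) = arccosh(-<q, k>_L);
    # softmax over keys, then a plain weighted sum of (Euclidean) values
    ip = -lorentz_inner(q[:, None, :], k[None, :, :])
    scores = -np.arccosh(np.clip(ip, 1.0, None)) / tau
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v, w
```

A query that coincides with a key has geodesic distance zero to it and therefore receives that key's largest attention weight; the temperature `tau` controls how sharply the weights decay with distance. Whether LAtte aggregates values this way or in the tangent space is an assumption, not something stated in the summary.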
📝 Abstract
Electroencephalogram (EEG) classification is critical for applications ranging from medical diagnostics to brain-computer interfaces, yet it remains challenging due to the inherently low signal-to-noise ratio (SNR) and high inter-subject variability. To address these issues, we propose LAtte, a novel framework that integrates a Lorentz Attention Module with an InceptionTime-based encoder to enable robust and generalizable EEG classification. Unlike prior work, which evaluates primarily on single-subject performance, LAtte focuses on cross-subject training. First, we learn a shared baseline signal across all subjects using pretraining tasks to capture common underlying patterns. Then, we utilize novel Lorentz low-rank adapters to learn subject-specific embeddings that model individual differences. This allows us to learn a shared model that performs robustly across subjects and can subsequently be fine-tuned for individual subjects or generalized to unseen ones. We evaluate LAtte on three well-established EEG datasets, achieving substantial improvements over current state-of-the-art methods.
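The abstract does not specify the internals of the Lorentz low-rank adapters. A minimal sketch of the general idea, assuming the standard LoRA pattern transplanted to the hyperboloid (map a point to the tangent space at the origin, apply a low-rank residual update, map back), is shown below; all names and the zero-initialization choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def lorentz_inner(x, y):
    # Lorentzian inner product: -x0*y0 + <x_1:, y_1:>
    return -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(-1)

def expmap0(v):
    # exponential map at the origin o=(1,0,...,0): tangent coords -> hyperboloid
    n = np.clip(np.linalg.norm(v, axis=-1, keepdims=True), 1e-9, None)
    return np.concatenate([np.cosh(n), np.sinh(n) * v / n], axis=-1)

def logmap0(x):
    # logarithmic map at the origin: hyperboloid point -> tangent coords
    d = np.arccosh(np.clip(x[..., :1], 1.0, None))
    sp = x[..., 1:]
    n = np.clip(np.linalg.norm(sp, axis=-1, keepdims=True), 1e-9, None)
    return d * sp / n

class LorentzLoRA:
    """Subject-specific low-rank adapter acting in the tangent space at the origin."""
    def __init__(self, dim, rank, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(scale=0.01, size=(dim, rank))
        self.B = np.zeros((rank, dim))  # zero init: adapter starts as the identity

    def __call__(self, x):
        v = logmap0(x)                 # hyperboloid -> tangent space
        v = v + v @ self.A @ self.B    # low-rank residual update (rank << dim)
        return expmap0(v)              # tangent space -> hyperboloid
```

With `B` initialized to zero the adapter is an exact identity at the start of fine-tuning, so personalization begins from the shared pretrained model; only the small `A` and `B` matrices are trained per subject, which is what makes the scheme parameter-efficient.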