🤖 AI Summary
Distribution shifts in cross-subject and cross-device EEG signals severely degrade model generalizability, and conventional normalization layers (e.g., BatchNorm) underperform because they ignore temporal dependencies. Method: PSDNorm is a test-time normalization layer that leverages power spectral density (PSD) priors and the theoretically grounded Monge map from optimal transport, enabling temporally aware feature standardization without model retraining. Contribution/Results: PSDNorm integrates physiologically interpretable PSD modeling with principled distribution alignment, operating as a test-time domain adaptation technique for EEG. Evaluated on ten sleep staging datasets, it sets a new state of the art on datasets unseen during training: substantial overall F1-score gains, a +12.3% average F1 improvement on the hardest 20% of subjects, and 4x better data efficiency than the best baseline.
📝 Abstract
Distribution shift poses a significant challenge in machine learning, particularly in biomedical applications such as EEG signals collected across different subjects, institutions, and recording devices. While existing normalization layers (BatchNorm, LayerNorm, and InstanceNorm) help address distribution shifts, they fail to capture the temporal dependencies inherent in such signals. In this paper, we propose PSDNorm, a layer that leverages Monge mapping and temporal context to normalize feature maps in deep learning models. Notably, the proposed method operates as a test-time domain adaptation technique, addressing distribution shifts without additional training. Evaluations on 10 sleep staging datasets using the U-Time model demonstrate that PSDNorm achieves state-of-the-art performance at test time on datasets not seen during training while being 4x more data-efficient than the best baseline. Additionally, PSDNorm provides a significant improvement in robustness, achieving markedly higher F1 scores for the 20% hardest subjects.
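To make the core idea concrete, here is a minimal, hypothetical sketch of PSD-based Monge mapping for a single 1-D signal. For centered stationary Gaussian signals, the Monge map that transports a source distribution with PSD p_src onto a target with PSD p_tgt is a linear filter with magnitude response sqrt(p_tgt / p_src). The function name `monge_psd_filter`, the Welch-based PSD estimate, and the frequency-domain filtering below are illustrative assumptions, not the paper's actual layer, which operates on feature maps inside a deep network.

```python
import numpy as np
from scipy.signal import welch


def monge_psd_filter(x, psd_target, nperseg=256):
    """Align the PSD of 1-D signal `x` to `psd_target` via a Monge-map filter.

    `psd_target` must be sampled on the Welch frequency grid implied by
    `nperseg` (illustrative sketch; the paper's PSDNorm layer differs).
    """
    # Estimate the source PSD with Welch's method (frequencies in cycles/sample).
    f, psd_src = welch(x, nperseg=nperseg)

    # Monge map between centered stationary Gaussians: a filter whose
    # magnitude response is sqrt(target PSD / source PSD).
    h_mag = np.sqrt(psd_target / np.maximum(psd_src, 1e-12))

    # Apply the filter in the frequency domain (zero phase).
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x))  # same unit as Welch's grid
    h = np.interp(freqs, f, h_mag)   # resample filter onto FFT bins
    return np.fft.irfft(X * h, n=len(x))
```

Usage-wise, a test-time normalization would estimate a reference (e.g., barycentric) target PSD from training data once, then apply this filtering to each incoming signal or feature map; when the target PSD equals the source PSD, the map reduces to the identity.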