🤖 AI Summary
To address critical challenges in EEG-based emotion recognition, including poor model stability, weak noise robustness, and limited cross-subject generalizability, this paper introduces Lipschitz continuity constraints to this domain for the first time, pairing a theoretical stability guarantee with a data-driven ensemble framework. Specifically, Lipschitz regularization bounds the sensitivity of the input-output mapping, so small perturbations of the input can cause only proportionally small changes in the prediction. Complementing this, a multi-stage pipeline integrates time-frequency-spatial feature extraction, multi-source feature alignment, and weighted-voting ensemble learning to reduce the bias and variance that afflict any single model. Evaluated on the EAV, FACED, and SEED benchmarks, the proposed method achieves average accuracies of 76.43%, 83.00%, and 89.22%, respectively, outperforming state-of-the-art approaches. Notably, it demonstrates superior robustness under low signal-to-noise-ratio conditions and in cross-subject settings.
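The summary does not specify how the Lipschitz constraint is enforced; one standard way to bound a linear layer's Lipschitz constant is to rescale its weight matrix by its spectral norm (largest singular value), estimated with power iteration. The sketch below is illustrative only and is not taken from the paper; `spectral_norm` and `lipschitz_constrain` are hypothetical helper names.

```python
import numpy as np

def spectral_norm(W, n_iter=100, eps=1e-12):
    """Estimate the largest singular value of W via power iteration.

    Note: this is a generic technique, not the paper's stated method.
    """
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    v = W.T @ u
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    return float(u @ W @ v)

def lipschitz_constrain(W, L=1.0):
    """Rescale W so the map x -> W @ x has Lipschitz constant <= L
    (in the 2-norm), leaving W unchanged if it already satisfies the bound."""
    s = spectral_norm(W)
    return W if s <= L else W * (L / s)
```

Applying such a projection after each gradient step keeps every layer 1-Lipschitz, so the whole network's sensitivity to input noise is bounded by the product of the per-layer constants.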
📝 Abstract
Accurate and efficient perception of emotional states in oneself and others is crucial, as emotion-related disorders are associated with severe psychosocial impairments. While electroencephalography (EEG) offers a powerful tool for emotion detection, current EEG-based emotion recognition (EER) methods face key limitations: insufficient model stability, limited accuracy in processing high-dimensional nonlinear EEG signals, and poor robustness against intra-subject variability and signal noise. To address these challenges, we propose LEREL (Lipschitz continuity-constrained Emotion Recognition Ensemble Learning), a novel framework that significantly improves both the accuracy and the robustness of emotion recognition. LEREL employs Lipschitz continuity constraints to enhance model stability and generalization, reducing susceptibility to signal variability and noise while maintaining strong performance on small-sample datasets. Its ensemble learning strategy further reduces single-model bias and variance through multi-classifier decision fusion, optimizing overall performance. Experimental results on three public benchmark datasets (EAV, FACED, and SEED) demonstrate LEREL's effectiveness, achieving average recognition accuracies of 76.43%, 83.00%, and 89.22%, respectively.
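The abstract's "multi-classifier decision fusion" via weighted voting can be sketched generically: each classifier outputs class probabilities, which are averaged with per-classifier weights (e.g., validation accuracies) before taking the argmax. This is a minimal illustration of weighted soft voting in general, not the paper's exact fusion rule; `weighted_vote` is a hypothetical name.

```python
import numpy as np

def weighted_vote(probs, weights):
    """Fuse per-classifier probability matrices by weighted averaging.

    probs:   list of (n_samples, n_classes) arrays, one per classifier.
    weights: one non-negative weight per classifier, e.g. its
             validation accuracy (assumed here; the paper's weighting
             scheme may differ).
    Returns the fused class index for each sample.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the fused rows remain probabilities
    fused = sum(wi * p for wi, p in zip(w, probs))
    return fused.argmax(axis=1)

# Example: two classifiers on two samples, the first trusted 3x as much.
p1 = np.array([[0.7, 0.3], [0.2, 0.8]])
p2 = np.array([[0.4, 0.6], [0.6, 0.4]])
decisions = weighted_vote([p1, p2], weights=[3.0, 1.0])  # -> [0, 1]
```

Soft voting with probability averaging tends to be more robust than hard majority voting when classifiers are well calibrated, since confident predictions carry more weight in the fused decision.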