🤖 AI Summary
To address key challenges in epileptic EEG signal classification, namely non-stationarity, low signal-to-noise ratio, and scarcity of labeled data, this paper proposes a classification framework that integrates Universum learning with generalized eigenvalue decomposition. Two models are introduced: U-GEPSVM and its improved variant, IU-GEPSVM. The key contribution is a pair of objective functions, a ratio-type formulation and a weighted difference formulation, the latter decoupling the optimization so that inter-class separation and alignment with Universum-based prior knowledge can be controlled independently, improving generalization and robustness. Evaluated on the Bonn dataset, IU-GEPSVM achieves peak accuracies of 85.0% (O vs. S) and 80.0% (Z vs. S), with mean accuracies of 81.29% and 77.57%, respectively, outperforming the baseline methods. The work offers an interpretable and stable approach to automatic EEG seizure detection under small-sample, high-noise conditions.
📝 Abstract
The paper presents two novel Universum-enhanced classifiers for EEG signal classification: the Universum Generalized Eigenvalue Proximal Support Vector Machine (U-GEPSVM) and the Improved U-GEPSVM (IU-GEPSVM). By combining the computational efficiency of generalized eigenvalue decomposition with the generalization benefits of Universum learning, the proposed models address critical challenges in EEG analysis: non-stationarity, low signal-to-noise ratio, and limited labeled data. U-GEPSVM extends the GEPSVM framework by incorporating Universum constraints through a ratio-based objective function, while IU-GEPSVM enhances stability through a weighted difference-based formulation that provides independent control over class separation and Universum alignment. The models are evaluated on the Bonn University EEG dataset across two binary classification tasks: O vs. S (healthy, eyes closed, vs. seizure) and Z vs. S (healthy, eyes open, vs. seizure). IU-GEPSVM achieves peak accuracies of 85% (O vs. S) and 80% (Z vs. S), with mean accuracies of 81.29% and 77.57% respectively, outperforming baseline methods.
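To make the weighted difference idea concrete, the sketch below shows a minimal GEPSVM-style solver, not the paper's exact formulation: for each class it fits a proximal hyperplane by minimizing own-class closeness plus a Universum-alignment term minus a weighted other-class term, solved as the smallest-eigenvalue eigenvector of a symmetric matrix. The parameter names `c`, `c_u`, and `delta`, and the cross-class averaging used to build the Universum set, are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def fit_plane(A, B, U, c=1.0, c_u=0.1, delta=1e-4):
    """Fit a proximal hyperplane w.x + b = 0 close to rows of A, far from
    rows of B, with Universum rows U pulled toward the plane.

    Illustrative weighted-difference objective (a sketch, not the paper's
    exact form):
        min_{||z||=1} ||A_e z||^2 + c_u ||U_e z||^2 - c ||B_e z||^2 + delta ||z||^2
    where M_e = [M, 1] appends a bias column and z = (w, b).
    """
    def gram(M):
        Me = np.hstack([M, np.ones((M.shape[0], 1))])  # append bias column
        return Me.T @ Me

    S = gram(A) + c_u * gram(U) - c * gram(B) + delta * np.eye(A.shape[1] + 1)
    vals, vecs = eigh(S)            # eigenvalues in ascending order
    z = vecs[:, 0]                  # eigenvector of the smallest eigenvalue
    return z[:-1], z[-1]            # (w, b)

def predict(X, planes):
    """Assign each row of X to the class whose hyperplane is nearer."""
    dists = [np.abs(X @ w + b) / np.linalg.norm(w) for w, b in planes]
    return np.argmin(np.vstack(dists), axis=0)

# Toy demo: two Gaussian blobs plus a Universum of cross-class averages.
rng = np.random.default_rng(0)
A = rng.normal([0.0, 0.0], 0.3, (40, 2))           # class 0
B = rng.normal([3.0, 3.0], 0.3, (40, 2))           # class 1
U = (A[:20] + B[:20]) / 2.0                        # Universum: between-class points
planes = [fit_plane(A, B, U), fit_plane(B, A, U)]  # one plane per class
X_test = np.vstack([rng.normal([0.0, 0.0], 0.3, (20, 2)),
                    rng.normal([3.0, 3.0], 0.3, (20, 2))])
y_true = np.r_[np.zeros(20, dtype=int), np.ones(20, dtype=int)]
acc = np.mean(predict(X_test, planes) == y_true)
```

Replacing the weighted difference with a Rayleigh-quotient ratio (solved as a generalized eigenproblem via `eigh(G, H)`) would correspond to the ratio-based U-GEPSVM-style variant described in the abstract.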