Commuting Distance Regularization for Timescale-Dependent Label Inconsistency in EEG Emotion Recognition

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Timescale-dependent label inconsistency (TsDLI), arising from misaligned temporal annotations, introduces severe label noise and poses a critical challenge in EEG-based emotion recognition. Method: We propose two novel regularization strategies, Local Variation Loss (LVL) and Local-Global Consistency Loss (LGCL), grounded in bounded-variation function modeling and commute-time-distance-based graph construction; together they form a robust graph-regularized framework for mitigating temporal label noise. Our architecture integrates graph neural network, LSTM, and Transformer modules to enable multi-scale emotion prediction while preserving model interpretability and generalization. Results: On the DREAMER and DEAP datasets, LVL and LGCL consistently achieve the best and second-best performance across diverse backbone networks, significantly improving classification accuracy and label consistency. To the best of our knowledge, this work provides the first systematic theoretical and practical solution to TsDLI in EEG emotion recognition.
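The exact form of LVL is not reproduced in this summary. As a rough, hypothetical sketch of a bounded-variation-style penalty on temporally local predictions (the function name and tensor shapes are assumptions, not the paper's code), one could sum the absolute differences between successive per-window class distributions within each trial, discouraging rapidly oscillating local predictions:

```python
import torch
import torch.nn.functional as F

def local_variation_loss(window_logits: torch.Tensor) -> torch.Tensor:
    """Hypothetical bounded-variation-style penalty (illustrative only).

    window_logits: (batch, n_windows, n_classes) logits for temporally
    local (per-window) predictions within each trial.
    Returns the mean total variation of the class-probability sequence.
    """
    probs = F.softmax(window_logits, dim=-1)        # (B, T, C)
    diffs = probs[:, 1:, :] - probs[:, :-1, :]      # successive window-to-window changes
    tv = diffs.abs().sum(dim=(1, 2))                # total variation per trial
    return tv.mean()
```

In training, such a term would typically be added to the classification loss with a weighting coefficient, e.g. `loss = ce + lam * local_variation_loss(logits)`.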

📝 Abstract
In this work, we address the often-overlooked issue of Timescale-Dependent Label Inconsistency (TsDLI) in training neural network models for EEG-based human emotion recognition. To mitigate TsDLI and enhance model generalization and explainability, we propose two novel regularization strategies: Local Variation Loss (LVL) and Local-Global Consistency Loss (LGCL). Both methods incorporate classical mathematical principles, specifically functions of bounded variation and commute-time distances, within a graph-theoretic framework. Complementing our regularizers, we introduce a suite of new evaluation metrics that better capture the alignment between temporally local predictions and their associated global emotion labels. We validate our approach through comprehensive experiments on two widely used EEG emotion datasets, DREAMER and DEAP, across a range of neural architectures including LSTM and transformer-based models. Performance is assessed using five distinct metrics encompassing both quantitative accuracy and qualitative consistency. Results consistently show that our proposed methods outperform state-of-the-art baselines, delivering superior aggregate performance and offering a principled trade-off between interpretability and predictive power under label inconsistency. Notably, LVL achieves the best aggregate rank across all benchmarked backbones and metrics, while LGCL frequently ranks second, highlighting the effectiveness of our framework.
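The abstract's commute-time distances admit a standard closed form via the Moore-Penrose pseudoinverse of the graph Laplacian: C(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij). The helper below is a generic illustration of that identity; the paper's actual graph construction over EEG channels or time windows is not specified here, so the function name and inputs are assumptions.

```python
import numpy as np

def commute_time_distances(W: np.ndarray) -> np.ndarray:
    """Commute-time distance matrix for a weighted, undirected graph.

    W: symmetric non-negative adjacency (affinity) matrix, shape (n, n).
    Uses C[i, j] = vol(G) * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]),
    where Lp is the Moore-Penrose pseudoinverse of the graph Laplacian.
    """
    degrees = W.sum(axis=1)
    L = np.diag(degrees) - W        # combinatorial graph Laplacian
    Lp = np.linalg.pinv(L)          # pseudoinverse handles the zero eigenvalue
    vol = degrees.sum()             # graph volume = sum of degrees
    d = np.diag(Lp)
    return vol * (d[:, None] + d[None, :] - 2.0 * Lp)

# Example: 4-node path graph; adjacent nodes have commute time 6 (= vol * effective resistance)
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
print(commute_time_distances(W).round(2))
```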
Problem

Research questions and friction points this paper is trying to address.

Addressing timescale-dependent label inconsistency in EEG emotion recognition
Enhancing model generalization and explainability via novel regularization strategies
Improving alignment between local predictions and global emotion labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local Variation Loss (LVL) for regularization
Local-Global Consistency Loss (LGCL) for regularization (see the sketch after this list)
Graph-theoretic framework built on bounded variation and commute-time distances
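The precise LGCL formulation is not given in this summary. Purely as an illustrative stand-in for the local-global consistency idea (not the paper's method; all names and shapes are hypothetical), the toy penalty below averages the per-window prediction distributions of a trial and scores them against the trial-level global label:

```python
import torch
import torch.nn.functional as F

def local_global_consistency_loss(window_logits: torch.Tensor,
                                  global_labels: torch.Tensor) -> torch.Tensor:
    """Toy local-global consistency penalty (not the paper's exact LGCL).

    window_logits: (batch, n_windows, n_classes) per-window logits.
    global_labels: (batch,) integer trial-level emotion labels.
    Averages the per-window class distributions and applies a negative
    log-likelihood against the global label.
    """
    probs = F.softmax(window_logits, dim=-1).mean(dim=1)   # (B, C) pooled local predictions
    log_probs = torch.log(probs.clamp_min(1e-8))
    return F.nll_loss(log_probs, global_labels)
```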
Xiaocong Zeng
School of Mathematics (Zhuhai), Sun Yat-sen University, Guangdong, 519082, CHINA
Craig Michoski
ODEN Institute for Computational Engineering & Sciences, University of Texas at Austin, Austin, Texas 78712-1229, USA
Yan Pang
University of Colorado
Computer Vision · Medical Image Analysis · Graph Neural Network
Dongyang Kuang
School of Mathematics (Zhuhai), Sun Yat-sen University, Guangdong, 519082, CHINA