AttDiCNN: Attentive Dilated Convolutional Neural Network for Automatic Sleep Staging using Visibility Graph and Force-directed Layout

📅 2024-08-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address data heterogeneity, computational cost, and model-reliability challenges in sleep staging, this paper proposes AttDiCNN, an end-to-end Attentive Dilated Convolutional Neural Network. EEG epochs are first converted into visibility graphs and embedded via a force-directed layout, capturing spatial-temporal structure in the signal. The network comprises three modules: the Localized Spatial Feature Extraction Network (LSFE) for spatial features, the Spatio-Temporal-Temporal Long Retention Network (S2TLR) for long-range context, and the Global Averaging Attention Network (G2A), which aggregates the outputs of the other two while keeping the parameter count low (1.4 M). Evaluated on three public benchmarks, EDFX, HMC, and NCH, the model attains accuracies of 98.56%, 99.66%, and 99.08%, respectively, surpassing state-of-the-art methods across all three datasets. AttDiCNN thus combines high accuracy, low computational complexity, and strong generalizability for robust sleep staging.
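The paper does not include reference code here, and the summary only names the force-directed layout step. As an illustrative sketch of that idea, the following is a toy Fruchterman-Reingold-style layout (function name and parameters are hypothetical, not the authors' implementation): edges attract, all node pairs repel, and a cooling schedule damps the motion.

```python
import math
import random

def spring_layout(nodes, edges, iters=200, seed=0):
    """Toy force-directed (Fruchterman-Reingold-style) layout.

    Edges pull their endpoints together, every node pair pushes apart,
    and a shrinking step size ("temperature") settles the positions.
    Returns a dict mapping each node to an (x, y) coordinate.
    """
    rng = random.Random(seed)
    pos = {v: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for v in nodes}
    k = 1.0 / max(1, len(nodes)) ** 0.5  # ideal spring length
    for step in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for i, u in enumerate(nodes):          # pairwise repulsion
            for v in nodes[i + 1:]:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
                disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
        for u, v in edges:                     # spring attraction along edges
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
        t = 0.1 * (1 - step / iters)           # cooling schedule
        for v in nodes:
            d = math.hypot(*disp[v]) or 1e-9
            pos[v][0] += disp[v][0] / d * min(d, t)
            pos[v][1] += disp[v][1] / d * min(d, t)
    return {v: tuple(p) for v, p in pos.items()}
```

In the paper's pipeline, such a layout would turn each EEG-derived visibility graph into a 2-D image-like representation that the convolutional front end can consume.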

📝 Abstract
Sleep stages play an essential role in the identification of sleep patterns and the diagnosis of sleep disorders. In this study, we present an automated sleep stage classifier termed the Attentive Dilated Convolutional Neural Network (AttDiCNN), which uses deep learning methodologies to address challenges related to data heterogeneity, computational complexity, and reliable automatic sleep staging. We employed a force-directed layout based on the visibility graph to capture the most significant information from the EEG signals, representing the spatial-temporal features. The proposed network consists of three components: the Localized Spatial Feature Extraction Network (LSFE), the Spatio-Temporal-Temporal Long Retention Network (S2TLR), and the Global Averaging Attention Network (G2A). The LSFE is tasked with capturing spatial information from sleep data, the S2TLR is designed to extract the most pertinent information in long-term contexts, and the G2A reduces computational overhead by aggregating information from the LSFE and S2TLR. We evaluated the performance of our model on three comprehensive and publicly accessible datasets, achieving state-of-the-art accuracy of 98.56%, 99.66%, and 99.08% for the EDFX, HMC, and NCH datasets, respectively, while maintaining a low computational complexity with 1.4 M parameters. The results substantiate that our proposed architecture surpasses existing methodologies in several performance metrics, thus proving its potential as an automated tool in clinical settings.
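The visibility-graph transform the abstract refers to can be sketched as follows. This is a minimal, illustrative implementation of the standard natural visibility criterion (the function name is mine, and the paper's exact variant may differ): two samples are linked whenever the straight line between them clears every sample in between.

```python
def visibility_graph(series):
    """Natural visibility graph of a time series.

    Samples i < j are connected if the line segment from (i, y_i) to
    (j, y_j) passes strictly above every intermediate sample (k, y_k).
    Returns the edge set as pairs of sample indices.
    """
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # Height of the i-j sight line at position k, by linear interpolation.
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```

For example, in the series `[1.0, 2.0, 1.0, 1.5]` the peak at index 1 blocks the sight line from index 0 to indices 2 and 3, so those edges are absent while all adjacent samples remain connected. Applied epoch-wise to EEG, this maps each window to a graph whose structure reflects the signal's temporal dynamics.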
Problem

Research questions and friction points this paper is trying to address.

Automating sleep stage classification using deep learning
Addressing data heterogeneity and computational complexity challenges
Extracting spatial-temporal features from EEG signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attentive Dilated Convolutional Neural Network for automated sleep staging
Force-directed layout captures spatial-temporal EEG signal features
Three-module architecture reduces computational overhead while maintaining accuracy
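The dilated convolutions behind the "multi-scale" claim above can be illustrated with a minimal 1-D sketch (a generic dilated convolution, not the authors' layer): inserting gaps of `dilation - 1` samples between kernel taps widens the receptive field without adding parameters, which is how stacked dilations capture context at several time scales cheaply.

```python
def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution with `dilation - 1` skipped samples
    between kernel taps; dilation=1 is an ordinary convolution."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of one output sample
    return [sum(kernel[t] * x[i + t * dilation] for t in range(k))
            for i in range(len(x) - span + 1)]

# A 2-tap sum kernel: dilation widens which samples each output mixes.
print(dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1], dilation=1))  # → [3, 5, 7, 9, 11]
print(dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1], dilation=2))  # → [4, 6, 8, 10]
```

Running the same kernel at several dilation rates and concatenating the outputs is the usual way such a module extracts multi-scale temporal features from a single signal.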