🤖 AI Summary
To address the challenge of jointly achieving fine-grained time-frequency modeling and clinical interpretability in EEG-based sleep staging, this paper proposes a hierarchical vision-language model. Methodologically, it introduces a vision-enhancement module to generate high-level semantic tokens, integrates multi-level feature alignment to fuse time-frequency representations with CLIP-pretrained linguistic priors, and incorporates a chain-of-thought (CoT) reasoning module to emulate expert decision-making logic. Unlike existing approaches, the model eliminates handcrafted features while preserving end-to-end learnability, significantly improving both discriminative accuracy and clinical interpretability. Evaluated on public EEG sleep staging datasets, it achieves a 3.2% absolute gain in classification accuracy; moreover, its generated CoT reasoning paths exhibit strong consistency with clinical annotations. This work establishes a novel paradigm for automated, trustworthy EEG analysis.
📝 Abstract
Sleep stage classification based on electroencephalography (EEG) is fundamental for assessing sleep quality and diagnosing sleep-related disorders. However, most traditional machine learning methods rely heavily on prior knowledge and handcrafted features, while existing deep learning models still struggle to jointly capture fine-grained time-frequency patterns and achieve clinical interpretability. Recently, vision-language models (VLMs) have made significant progress in the medical domain, yet their performance remains constrained when applied to physiological waveform data, especially EEG signals, due to their limited visual understanding and insufficient reasoning capability. To address these challenges, we propose EEG-VLM, a hierarchical vision-language framework that integrates multi-level feature alignment with visually enhanced language-guided reasoning for interpretable EEG-based sleep stage classification. Specifically, a specialized visual enhancement module constructs high-level visual tokens from intermediate-layer features to extract rich semantic representations of EEG images. These tokens are further aligned with low-level CLIP features through a multi-level alignment mechanism, enhancing the VLM's image-processing capability. In addition, a Chain-of-Thought (CoT) reasoning strategy decomposes complex medical inference into interpretable logical steps, effectively simulating expert-like decision-making. Experimental results demonstrate that the proposed method significantly improves both the accuracy and interpretability of VLMs in EEG-based sleep stage classification, showing promising potential for automated and explainable EEG analysis in clinical settings.
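The multi-level alignment described above — projecting intermediate-layer visual features into high-level semantic tokens and aligning them with low-level CLIP features — can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the shapes, the linear projection, and the cosine-similarity objective are all assumptions made for clarity, and the random arrays stand in for real EEG-image features and CLIP embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(features, weights):
    # Linear projection of intermediate-layer features into the shared token space
    # (a stand-in for the paper's visual enhancement module).
    return features @ weights

def cosine_alignment_loss(a, b, eps=1e-8):
    # Mean (1 - cosine similarity) over paired token sets: 0 when perfectly
    # aligned, up to 2 when anti-aligned. One plausible alignment objective.
    a_n = a / (np.linalg.norm(a, axis=-1, keepdims=True) + eps)
    b_n = b / (np.linalg.norm(b, axis=-1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(a_n * b_n, axis=-1)))

# Hypothetical dimensions: 16 tokens, intermediate dim 256, CLIP dim 512.
inter_feats = rng.normal(size=(16, 256))        # intermediate-layer features
proj_w = rng.normal(size=(256, 512)) * 0.05     # learned projection (random here)
clip_feats = rng.normal(size=(16, 512))         # low-level CLIP features (stand-in)

high_tokens = project(inter_feats, proj_w)      # high-level semantic tokens
loss = cosine_alignment_loss(high_tokens, clip_feats)
print(high_tokens.shape, round(loss, 4))
```

In a real training loop, `proj_w` would be learned so that minimizing the alignment loss pulls the high-level visual tokens toward the CLIP feature space, which is what lets the language model consume them alongside its pretrained linguistic priors.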