🤖 AI Summary
Real-time processing, individual adaptability, and clinical interactivity remain critical challenges in EEG-based affective computing and brain–computer interfaces. Method: We propose the first end-to-end lightweight large language model (LLM) paradigm specifically designed for affective EEG interpretation—a localized LLM with only 0.5B parameters. Our approach introduces novel EEG time-frequency feature encoding, task-specific prompt engineering, and a joint optimization strategy combining structured pruning and supervised fine-tuning. Results: The model significantly outperforms comparably sized (1.5B) and larger (7B) LLMs in both emotion classification accuracy and electronic health record (EHR) generation quality. It enables millisecond-level edge inference, supports personalized psychiatric recommendations, and automates clinical documentation—thereby substantially enhancing real-time responsiveness, individual adaptability, and clinical utility in mental health monitoring.
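The summary mentions EEG time-frequency feature encoding as a core ingredient. A minimal sketch of what such an encoding step might look like is shown below, computing per-channel band powers with Welch's method; the band edges, sampling rate, and function names are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical sketch of EEG time-frequency feature encoding: per-channel
# band powers (delta/theta/alpha/beta/gamma) that could later be serialized
# into an LLM prompt. Band edges and sampling rate are assumptions.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg: np.ndarray, fs: float = 250.0) -> dict:
    """eeg: (n_channels, n_samples) array -> {band: (n_channels,) mean PSD}."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(min(fs, eeg.shape[-1])), axis=-1)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = psd[:, mask].mean(axis=-1)
    return out

rng = np.random.default_rng(0)
feats = band_powers(rng.standard_normal((8, 1000)))  # 8 channels, 4 s at 250 Hz
print({name: p.shape for name, p in feats.items()})
```

Features like these are compact enough to embed in a text prompt, which fits the millisecond-level edge-inference goal stated above.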
📝 Abstract
In the fields of affective computing (AC) and brain-machine interface (BMI), analyzing physiological and behavioral signals to discern individual emotional states has emerged as a critical research frontier. While deep learning-based approaches have made notable strides in EEG emotion recognition, particularly in feature extraction and pattern recognition, significant challenges persist in achieving end-to-end emotion computation, including real-time processing, individual adaptation, and seamless user interaction. This paper presents the EEG Emotion Copilot, a system built on an optimized lightweight large language model (LLM) with 0.5B parameters running locally, which first recognizes emotional states directly from EEG signals, then generates personalized diagnostic and treatment suggestions, and finally automates the drafting of assisted electronic medical records. Specifically, we detail the key techniques: a novel prompt data structure, model pruning and fine-tuning, and deployment strategies aimed at improving real-time performance and computational efficiency. Extensive experiments show that our optimized lightweight LLM-based copilot offers a more intuitive interface for participant interaction and achieves superior accuracy in both emotion recognition and assisted electronic medical record generation, compared with models of similar or larger scale (1.5B, 1.8B, 3B, and 7B parameters). In summary, through these efforts, the proposed copilot is expected to advance the application of AC in the medical domain, offering an innovative solution for mental health monitoring. The code will be released at https://github.com/NZWANG/EEG_Emotion_Copilot.
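The abstract highlights a novel prompt data structure as one of the critical techniques. As a rough illustration only, the sketch below packs EEG-derived features, a task instruction, and an output template into a structured prompt for a local LLM; every field name here is a hypothetical assumption, since the paper's actual schema is not given in this abstract.

```python
import json

# Illustrative structured prompt (hypothetical schema, not the paper's):
# EEG features, the task, and an output template bundled for a 0.5B local LLM.
def build_prompt(subject_id: str, band_powers: dict) -> str:
    prompt = {
        "system": "You are an EEG emotion-recognition assistant.",
        "task": "Classify the emotional state and draft a brief medical record.",
        "eeg_features": {band: round(p, 4) for band, p in band_powers.items()},
        "output_format": {"emotion": "<label>",
                          "suggestion": "<personalized advice>",
                          "record": "<assisted EHR text>"},
        "subject": subject_id,
    }
    return json.dumps(prompt, indent=2)

print(build_prompt("S01", {"alpha": 0.8213, "beta": 0.4127}))
```

A fixed, machine-readable output template like `output_format` is one common way to make a small model's responses easy to parse into electronic medical record fields.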