E^2-LLM: Bridging Neural Signals and Interpretable Affective Analysis

📅 2026-01-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of large inter-subject variability, scarce labeled data, and limited interpretability in EEG-based emotion recognition by proposing the first multimodal large language model framework tailored for EEG signals. The approach integrates a pretrained EEG encoder, the Qwen large language model, and a learnable projection layer, leveraging emotion-discriminative pretraining, cross-modal alignment, and instruction tuning. Notably, it introduces a chain-of-thought reasoning mechanism—the first of its kind in this domain—to enhance model transparency and decision logic. Evaluated on a seven-class emotion dataset, the model achieves state-of-the-art classification performance and demonstrates superior generalization capabilities under zero-shot settings and complex scenarios, while offering improved interpretability through its reasoning process.

📝 Abstract
Emotion recognition from electroencephalography (EEG) signals remains challenging due to high inter-subject variability, limited labeled data, and the lack of interpretable reasoning in existing approaches. While recent multimodal large language models (MLLMs) have advanced emotion analysis, they have not been adapted to handle the unique spatiotemporal characteristics of neural signals. We present E^2-LLM (EEG-to-Emotion Large Language Model), the first MLLM framework for interpretable emotion analysis from EEG. E^2-LLM integrates a pretrained EEG encoder with Qwen-based LLMs through learnable projection layers, employing a multi-stage training pipeline that encompasses emotion-discriminative pretraining, cross-modal alignment, and instruction tuning with chain-of-thought reasoning. We design a comprehensive evaluation protocol covering basic emotion prediction, multi-task reasoning, and zero-shot scenario understanding. Experiments on a dataset spanning seven emotion categories demonstrate that E^2-LLM achieves excellent performance on emotion classification, with larger variants showing enhanced reliability and superior zero-shot generalization to complex reasoning scenarios. Our work establishes a new paradigm combining physiological signals with LLM reasoning capabilities, showing that model scaling improves both recognition accuracy and interpretable emotional understanding in affective computing.
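The cross-modal bridge the abstract describes, a pretrained EEG encoder whose patch features are mapped into the LLM's token-embedding space by a learnable projection and then interleaved with instruction tokens, can be sketched in outline. All dimensions, shapes, and names below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): the EEG encoder emits one
# feature vector per temporal patch; the LLM expects llm_dim embeddings.
eeg_dim, llm_dim, n_patches = 256, 1024, 32

# Stand-in for pretrained EEG encoder output: (n_patches, eeg_dim).
eeg_features = rng.standard_normal((n_patches, eeg_dim))

# Learnable projection layer: maps encoder features into the LLM's
# token-embedding space so they can be mixed with text tokens.
W = rng.standard_normal((eeg_dim, llm_dim)) * 0.02
b = np.zeros(llm_dim)
eeg_tokens = eeg_features @ W + b            # (n_patches, llm_dim)

# Embeddings for an instruction prompt such as
# "Describe the subject's emotional state." (illustrative only).
prompt_tokens = rng.standard_normal((8, llm_dim))

# The LLM consumes the projected EEG tokens followed by the instruction,
# and is trained end-to-end so the projection learns the alignment.
llm_input = np.concatenate([eeg_tokens, prompt_tokens], axis=0)
print(llm_input.shape)                       # (40, 1024)
```

In the multi-stage pipeline described above, only the projection (and later the LLM, via instruction tuning) would be updated, while the EEG encoder stays frozen or is fine-tuned per stage; this sketch shows only the forward data flow.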
Problem

Research questions and friction points this paper is trying to address.

EEG, emotion recognition, inter-subject variability, interpretable reasoning, affective computing
Innovation

Methods, ideas, or system contributions that make the work stand out.

EEG-to-Emotion, multimodal large language model, interpretable emotion analysis, chain-of-thought reasoning, zero-shot generalization
Authors

Fei Ma
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)

Han Lin
Zhejiang University

Yifan Xie
Tsinghua University
Embodied AI, 3D Vision

Hongwei Ren
Harbin Institute of Technology

Xiaoyu Shen
Eastern Institute of Technology, Ningbo
language model, multi-modal learning, reasoning

Wenbo Ding
University at Buffalo
security, machine learning

Qi Tian
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ); Huawei