UniMind: Unleashing the Power of LLMs for Unified Multi-Task Brain Decoding

📅 2025-06-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the limited generalizability and poor cross-task performance of existing EEG decoding models by proposing UniMind, a universal multi-task brain decoding framework. Methodologically, it introduces (1) a Neuro-Language Connector that aligns spatiotemporal EEG features with the semantic space of large language models (LLMs), and (2) a Task-aware Query Selection module that dynamically adapts query tokens to diverse brain decoding tasks. By integrating spatiotemporal feature extraction, cross-modal representation learning, and LLM-based semantic understanding, the approach achieves an average 12% accuracy improvement across ten public EEG datasets, substantially outperforming current state-of-the-art methods. Further neuroscientific analysis uncovers shared neural functional representations across distinct cognitive tasks, offering new insights into interpretable neural modeling and advancing AI-driven brain–computer interface paradigms.
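The paper itself does not publish implementation details in this listing, but the described Neuro-Language Connector (distilling spatiotemporal EEG tokens into a small set of representations an LLM can consume) can be illustrated with a minimal NumPy sketch. All class and parameter names here are assumptions for illustration, not the authors' code: learned query tokens cross-attend over EEG feature tokens, and the result is projected into the LLM embedding dimension.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class NeuroLanguageConnector:
    """Toy sketch (names and shapes are assumptions): distill a variable-length
    sequence of spatiotemporal EEG feature tokens into a fixed number of tokens
    via cross-attention, then project them into the LLM embedding space."""

    def __init__(self, eeg_dim, llm_dim, n_queries, seed=0):
        rng = np.random.default_rng(seed)
        # learned query tokens (would be trained end-to-end in practice)
        self.queries = rng.standard_normal((n_queries, eeg_dim))
        # projection into the language model's embedding dimension
        self.W_proj = rng.standard_normal((eeg_dim, llm_dim)) / np.sqrt(eeg_dim)

    def __call__(self, eeg_tokens):
        # eeg_tokens: (seq_len, eeg_dim) spatiotemporal EEG features
        d = eeg_tokens.shape[1]
        attn = softmax(self.queries @ eeg_tokens.T / np.sqrt(d))  # (n_queries, seq_len)
        distilled = attn @ eeg_tokens                             # (n_queries, eeg_dim)
        return distilled @ self.W_proj                            # (n_queries, llm_dim)

# Usage: the output tokens could be prepended to the LLM's input sequence.
conn = NeuroLanguageConnector(eeg_dim=16, llm_dim=32, n_queries=4)
eeg = np.random.default_rng(1).standard_normal((100, 16))
llm_tokens = conn(eeg)  # shape (4, 32)
```

The design choice sketched here (a small learned query set attending over many input tokens) is a common way to bridge a long, noisy modality into a fixed-size token budget; whether UniMind uses exactly this mechanism is not stated in the listing.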

๐Ÿ“ Abstract
Decoding human brain activity from electroencephalography (EEG) signals is a central challenge at the intersection of neuroscience and artificial intelligence, enabling diverse applications in mental state assessment, clinical monitoring, and human-machine interaction. Recent efforts have extensively explored EEG-based brain foundation models for generalized brain decoding, employing large-scale training on multiple datasets. However, most of these attempts struggle with generalizability and fail to achieve satisfactory performance without task-specific tuning due to pronounced inherent heterogeneity among decoding tasks. To address these challenges, we present UniMind, a general-purpose EEG foundation model for unified multi-task brain decoding by uniquely unleashing the power of large language models to comprehend complex neural patterns. UniMind offers several advantages. First, we design a Neuro-Language Connector to bridge the modality gap between neural signals and large language models, distilling and transforming the spatiotemporal neural patterns of EEG data into representations understandable by language models. Second, a Task-aware Query Selection module is proposed to inject task-awareness into the cross-modal alignment by dynamically generating task-adaptive query tokens, enabling learning of task-relevant neural patterns across diverse tasks. Extensive experiments across ten datasets demonstrate that UniMind substantially outperforms state-of-the-art multi-task decoding models, with an average gain of 12 percent, while also offering valuable neuroscientific insights into neural functional correlations across tasks. The code will be made publicly available.
Problem

Research questions and friction points this paper is trying to address.

Decoding EEG signals for diverse neuroscience applications
Overcoming task heterogeneity in brain decoding models
Bridging neural signals and language models for better decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuro-Language Connector bridges EEG and LLMs
Task-aware Query Selection enables dynamic task adaptation
UniMind achieves 12% better multi-task decoding
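The Task-aware Query Selection idea above (dynamically generating task-adaptive query tokens rather than sharing one fixed set across tasks) can also be sketched minimally. This is a hypothetical illustration, not the paper's implementation: a shared bank of query tokens is mixed with task-conditioned weights, so each task gets its own queries while parameters stay shared.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TaskAwareQuerySelection:
    """Toy sketch (all names are assumptions): a shared query bank plus
    per-task mixing weights produce task-adaptive query tokens."""

    def __init__(self, n_bank, n_queries, dim, n_tasks, seed=0):
        rng = np.random.default_rng(seed)
        # bank of query tokens shared across all decoding tasks
        self.bank = rng.standard_normal((n_bank, dim))
        # task-conditioned selection logits (trained in practice)
        self.task_logits = rng.standard_normal((n_tasks, n_queries, n_bank))

    def __call__(self, task_id):
        # soft selection over the bank for the requested task
        weights = softmax(self.task_logits[task_id], axis=-1)  # (n_queries, n_bank)
        return weights @ self.bank                             # (n_queries, dim)

# Usage: each task id yields its own query tokens for the connector.
selector = TaskAwareQuerySelection(n_bank=8, n_queries=4, dim=16, n_tasks=3)
task_queries = selector(task_id=1)  # shape (4, 16)
```

A soft mixture like this lets tasks share neural patterns through the common bank while still specializing, which is one plausible reading of the "dynamic task adaptation" claim; the actual mechanism may differ.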
Authors

Weiheng Lu — Shanghai Artificial Intelligence Laboratory
Chunfeng Song — Shanghai AI Lab (Computer Vision, Pattern Recognition, AI4Science)
Jiamin Wu — Shanghai Artificial Intelligence Laboratory, The Chinese University of Hong Kong
Pengyu Zhu — North China Electric Power University (Artificial Intelligence, Brain-Computer Interface, AI for Science, Pattern Recognition)
Yuchen Zhou — Shanghai Artificial Intelligence Laboratory
Weijian Mai — University of Hong Kong (AI4Neuro, Generative Model, Deep Learning)
Qihao Zheng — Shanghai AI Lab (Neuroscience, NeuroAI, AI4Neuro, AI4Science)
Wanli Ouyang — Shanghai Artificial Intelligence Laboratory, The Chinese University of Hong Kong