UMind: A Unified Multitask Network for Zero-Shot M/EEG Visual Decoding

📅 2025-09-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses key bottlenecks in EEG/MEG-based visual decoding: overreliance on single-task learning, insufficient cross-modal semantic alignment, and poor zero-shot generalization. To this end, we propose the first unified multi-task brain decoding framework. Methodologically, we design a spatiotemporal neural feature extraction network that integrates coarse- and fine-grained text-guided semantic representations; further, we introduce a dual-conditioned diffusion model that jointly optimizes alignment across neural, visual, and semantic modalities, enabling zero-shot visual stimulus retrieval, classification, and high-fidelity reconstruction. Extensive evaluation on multiple EEG/MEG datasets demonstrates substantial improvements in zero-shot classification and retrieval accuracy, alongside reconstructed images exhibiting enhanced biological plausibility and richer perceptual detail. Our framework establishes a scalable, multi-task paradigm for general-purpose brain decoding.

๐Ÿ“ Abstract
Decoding visual information from time-resolved brain recordings, such as EEG and MEG, plays a pivotal role in real-time brain-computer interfaces. However, existing approaches primarily focus on direct brain-image feature alignment and are limited to single-task frameworks or task-specific models. In this paper, we propose a Unified MultItask Network for zero-shot M/EEG visual Decoding (referred to as UMind), covering visual stimulus retrieval, classification, and reconstruction, where multiple tasks mutually enhance each other. Our method learns robust neural-visual and semantic representations through multimodal alignment with both image and text modalities. The integration of both coarse- and fine-grained texts enhances the extraction of these neural representations, enabling more detailed semantic and visual decoding. These representations then serve as dual conditional inputs to a pre-trained diffusion model, guiding visual reconstruction from both visual and semantic perspectives. Extensive evaluations on MEG and EEG datasets demonstrate the effectiveness, robustness, and biological plausibility of our approach in capturing spatiotemporal neural dynamics. Our approach establishes a multitask pipeline for brain visual decoding, highlighting the synergy of semantic information in visual feature extraction.
Problem

Research questions and friction points this paper is trying to address.

Decoding visual information from EEG and MEG brain recordings
Overcoming limitations of single-task brain-image alignment approaches
Enabling zero-shot visual stimulus retrieval, classification, and reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified multitask network for zero-shot decoding
Multimodal alignment with image and text
Dual conditional inputs to diffusion model
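The multimodal alignment idea above can be illustrated with a minimal sketch. The paper does not publish this code; the snippet below is an assumed CLIP-style contrastive (InfoNCE) formulation in which a neural embedding is aligned simultaneously with image, coarse-text, and fine-text embeddings. All function names and loss weights are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(neural: torch.Tensor, target: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between a batch of neural embeddings and a
    target-modality batch; matching pairs sit on the diagonal."""
    neural = F.normalize(neural, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = neural @ target.t() / temperature        # (B, B) cosine similarities
    labels = torch.arange(neural.size(0))             # i-th neural matches i-th target
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

def multitask_alignment_loss(z_neural, z_image, z_coarse_text, z_fine_text,
                             w_img=1.0, w_coarse=0.5, w_fine=0.5):
    """Sum of alignment terms against image and coarse/fine text embeddings.
    The relative weights are illustrative assumptions."""
    return (w_img * info_nce(z_neural, z_image)
            + w_coarse * info_nce(z_neural, z_coarse_text)
            + w_fine * info_nce(z_neural, z_fine_text))
```

At inference, the same shared neural embedding can then support zero-shot retrieval and classification by nearest-neighbor search in the aligned space, and serve as a conditioning input for reconstruction.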
Chengjian Xu
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
machine learning · brain-computer interface
Yonghao Song
Tsinghua University
Brain-Computer Interface · Machine Learning
Zelin Liao
School of Automation, Guangdong University of Technology, China
Haochuan Zhang
School of Automation, Guangdong University of Technology, China
Qiong Wang
Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, China
Qingqing Zheng
Associate Professor, Shenzhen University of Advanced Technology
machine learning · computer vision · brain-computer interfaces