AI Summary
This study addresses key bottlenecks in EEG/MEG-based visual decoding: overreliance on single-task learning, insufficient cross-modal semantic alignment, and poor zero-shot generalization. To this end, we propose the first unified multi-task brain decoding framework. Methodologically, we design a spatiotemporal neural feature extraction network that integrates coarse- and fine-grained text-guided semantic representations; further, we introduce a dual-conditioned diffusion model that jointly optimizes alignment across neural, visual, and semantic modalities, enabling zero-shot visual stimulus retrieval, classification, and high-fidelity reconstruction. Extensive evaluation on multiple EEG/MEG datasets demonstrates substantial improvements in zero-shot classification and retrieval accuracy, alongside reconstructed images exhibiting enhanced biological plausibility and richer perceptual detail. Our framework establishes a scalable, multi-task paradigm for general-purpose brain decoding.
Abstract
Decoding visual information from time-resolved brain recordings, such as EEG and MEG, plays a pivotal role in real-time brain-computer interfaces. However, existing approaches primarily focus on direct brain-image feature alignment and are limited to single-task frameworks or task-specific models. In this paper, we propose a Unified MultItask Network for zero-shot M/EEG visual Decoding (referred to as UMind), covering visual stimulus retrieval, classification, and reconstruction, where the multiple tasks mutually enhance each other. Our method learns robust neural-visual and semantic representations through multimodal alignment with both image and text modalities. The integration of both coarse- and fine-grained texts enhances the extraction of these neural representations, enabling more detailed semantic and visual decoding. These representations then serve as dual conditional inputs to a pre-trained diffusion model, guiding visual reconstruction from both visual and semantic perspectives. Extensive evaluations on MEG and EEG datasets demonstrate the effectiveness, robustness, and biological plausibility of our approach in capturing spatiotemporal neural dynamics. Our approach establishes a multitask pipeline for brain visual decoding, highlighting the synergy of semantic information in visual feature extraction.
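To make the alignment idea concrete, the sketch below illustrates one plausible form of the neural-visual and neural-semantic alignment described above: a symmetric InfoNCE (CLIP-style) contrastive loss that pulls each neural embedding toward its paired image and text embeddings. This is a minimal NumPy illustration, not the paper's actual objective; the encoder outputs, embedding dimension, and the choice of InfoNCE with this temperature are all assumptions for demonstration.

```python
import numpy as np

def normalize(x, axis=-1):
    # project embeddings onto the unit sphere before computing cosine logits
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(a, b, temperature=0.07):
    # symmetric InfoNCE over a batch of paired embeddings:
    # row i of `a` should match row i of `b` against all other rows
    a, b = normalize(a), normalize(b)
    logits = a @ b.T / temperature
    labels = np.arange(len(a))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()       # diagonal = true pairs

    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# hypothetical stand-ins: `z` would come from an M/EEG encoder, while
# `img` and `txt` would be frozen image/text embeddings (e.g. CLIP-like)
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 64))     # neural embeddings (batch of 8 trials)
img = rng.normal(size=(8, 64))   # paired image embeddings
txt = rng.normal(size=(8, 64))   # paired (coarse + fine-grained) text embeddings

# joint objective: align neural features to both modalities at once
loss = info_nce(z, img) + info_nce(z, txt)
```

The aligned neural embeddings would then play the role of the dual conditions (visual and semantic) handed to the pre-trained diffusion model; that conditioning step is outside the scope of this sketch.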