🤖 AI Summary
This study addresses critical bottlenecks in decoding multimodal perceptual content from EEG, namely the absence of standardized datasets, poor cross-subject generalizability, and inconsistent evaluation protocols. Systematically reviewing 1,800 publications under PRISMA guidelines, we construct the first end-to-end research taxonomy covering preprocessing, alignment modeling, generative decoding, and multimodal evaluation. Methodologically, we identify the development of standardized cross-subject EEG datasets as a priority and survey state-of-the-art generative architectures, including GANs, VAEs, and Transformers, to distill the current best-performing decoding paradigms. Our analysis reveals performance ceilings tightly coupled to the quantity and quality of available data, underscoring the field's strong data dependency. The resulting recommendations aim to improve decoding accuracy and support real-world deployment in clinical neurofeedback and brain–computer interfaces. This work establishes a comprehensive roadmap for generative neural decoding, bridging foundational research and translational applications.
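To make the taxonomy's stages concrete, the sketch below outlines one pattern common in this literature: an EEG encoder aligned to an image-embedding space with a CLIP-style contrastive objective, whose output can then condition a generative decoder. This is an illustrative toy example, not the pipeline of any specific reviewed system; all module names, dimensions, and hyperparameters are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EEGEncoder(nn.Module):
    """Hypothetical encoder: maps an EEG epoch (channels x time) to an embedding."""
    def __init__(self, n_channels=64, embed_dim=256):
        super().__init__()
        # Temporal convolution compresses the raw time axis into a token sequence.
        self.temporal = nn.Conv1d(n_channels, 128, kernel_size=25, stride=4)
        # A small Transformer models dependencies across the token sequence.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=128, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.temporal(x).transpose(1, 2)   # (batch, seq_len, 128)
        h = self.encoder(h).mean(dim=1)        # average-pool over time tokens
        return F.normalize(self.proj(h), dim=-1)

def alignment_loss(eeg_emb, img_emb, temperature=0.07):
    """CLIP-style InfoNCE loss pulling matched EEG/image pairs together."""
    logits = eeg_emb @ img_emb.t() / temperature
    targets = torch.arange(len(eeg_emb))
    return F.cross_entropy(logits, targets)

# Toy forward pass: 8 epochs, 64 channels, 512 samples; random "image" embeddings.
eeg = torch.randn(8, 64, 512)
img_emb = F.normalize(torch.randn(8, 256), dim=-1)
loss = alignment_loss(EEGEncoder()(eeg), img_emb)
print(f"alignment loss: {loss.item():.3f}")
```

In the surveyed systems, the aligned EEG embedding would then condition a pretrained generator, such as a GAN or VAE decoder, to synthesize the perceived stimulus.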
📝 Abstract
Electroencephalography (EEG) is an invaluable tool in neuroscience, offering insights into brain activity with high temporal resolution. Recent advances in machine learning and generative modeling have catalyzed the use of EEG to reconstruct perceptual experiences, including images, videos, and audio. This paper systematically reviews EEG-to-output research, focusing on state-of-the-art generative methods, evaluation metrics, and data challenges. Following PRISMA guidelines, we analyze 1,800 studies and identify key trends, challenges, and opportunities in the field. The findings emphasize the potential of advanced models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformers, while highlighting the pressing need for standardized datasets and cross-subject generalization. We propose a roadmap for future research that aims to improve decoding accuracy and broaden real-world applications.
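Because inconsistent evaluation protocols are a central theme of the review, a brief sketch of one metric widely used in EEG-to-image work may help: n-way top-k identification accuracy, which asks whether the decoded output is closer to its ground-truth stimulus than to n-1 distractors in some embedding space. The snippet below is a hedged illustration with random placeholder embeddings, not a protocol mandated by the reviewed studies.

```python
import torch
import torch.nn.functional as F

def n_way_top_k_accuracy(decoded, truth, distractors, k=1):
    """decoded, truth: (dim,) embeddings; distractors: (n-1, dim).

    Returns 1.0 if the ground-truth stimulus ranks among the k candidates
    most similar to the decoded output, else 0.0.
    """
    candidates = torch.cat([truth.unsqueeze(0), distractors])     # truth at index 0
    sims = F.cosine_similarity(decoded.unsqueeze(0), candidates)  # (n,)
    return float(0 in sims.topk(k).indices)

# Toy 50-way trial with hypothetical 512-d embeddings.
decoded, truth = torch.randn(512), torch.randn(512)
distractors = torch.randn(49, 512)
print(n_way_top_k_accuracy(decoded, truth, distractors, k=5))
```

Averaged over many trials, this score gives a chance-corrected measure of decoding quality (chance is k/n), which is one reason it recurs across otherwise heterogeneous evaluation setups.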