3D-Telepathy: Reconstructing 3D Objects from EEG Signals

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low signal-to-noise ratio in electroencephalography (EEG) signals and the scarcity of paired EEG–3D stimuli severely hinder the application of 3D visual stimulus reconstruction in brain–computer interfaces (BCIs). This work presents the first method to directly reconstruct geometrically consistent and semantically plausible 3D objects from single-trial EEG recordings, overcoming the longstanding limitation of EEG decoding to 2D images. Methodologically, we propose a dual self-attention EEG encoder to enhance spatiotemporal–spectral representation; design a hybrid training paradigm integrating cross-attention, contrastive learning, and self-supervision; and innovatively combine Stable Diffusion priors with variational score distillation to drive neural radiance field (NeRF)-based 3D generation. Evaluated under scarce cross-subject EEG–3D data, our approach significantly outperforms 2D baselines, establishing a scalable new paradigm for BCI applications such as augmentative and alternative communication for aphasia.
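To make the "dual self-attention" idea concrete, here is a minimal NumPy sketch, assuming one attention branch runs across electrodes (spatial) and one across time steps (temporal), with identity Q/K/V projections for brevity; the paper's actual encoder, projection weights, and fusion scheme are not specified here, so treat this purely as an illustration of the mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Single-head scaled dot-product self-attention over the rows of x.

    x: (tokens, dim). Q/K/V projections are identity here for brevity;
    a real encoder would use learned linear maps.
    """
    d = x.shape[-1]
    weights = softmax(x @ x.T / np.sqrt(d))  # (tokens, tokens)
    return weights @ x

def dual_self_attention(eeg):
    """eeg: (channels, time) single-trial recording.

    One branch attends across electrodes (spatial), the other across
    time steps (temporal); additive fusion keeps the input shape.
    """
    spatial = self_attention(eeg)        # rows = electrodes
    temporal = self_attention(eeg.T).T   # rows = time steps
    return spatial + temporal
```

With a toy input of 32 electrodes and 128 samples, `dual_self_attention` returns a feature map of the same (32, 128) shape, which a downstream head could pool into a trial-level embedding.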

📝 Abstract
Reconstructing 3D visual stimuli from electroencephalography (EEG) data holds significant potential for applications in brain-computer interfaces (BCIs) and for aiding individuals with communication disorders. Traditionally, efforts have focused on converting brain activity into 2D images, neglecting the translation of EEG data into 3D objects. This limitation is noteworthy, as the human brain inherently processes three-dimensional spatial information whether observing 2D images or the real world. The neural activity captured by EEG contains rich spatial information that is inevitably lost when reconstructing only 2D images, limiting its practical applications in BCI. The transition from EEG data to 3D object reconstruction faces considerable obstacles: extensive noise within EEG signals and a scarcity of datasets pairing EEG with 3D information, both of which complicate the extraction of 3D visual content. To address this challenging task, we propose an innovative EEG encoder architecture that integrates a dual self-attention mechanism. We train the EEG encoder with a hybrid strategy that combines cross-attention, contrastive learning, and self-supervised learning. Additionally, by employing Stable Diffusion as a prior distribution and using Variational Score Distillation to train a neural radiance field (NeRF), we successfully generate 3D objects with similar content and structure from EEG data.
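The diffusion-prior step can be sketched as a score-distillation-style gradient: perturb a NeRF-rendered view, ask the frozen diffusion prior to predict the noise, and use the residual as a gradient signal for the NeRF. In the sketch below, `score_fn` is a stand-in for the Stable Diffusion noise predictor (an assumption; the real model is a conditioned U-Net), and note that Variational Score Distillation additionally subtracts a *learned* score of the current render distribution rather than the raw noise `eps`, which this simplified version omits.

```python
import numpy as np

def distillation_grad(render, score_fn, sigma, rng):
    """Score-distillation-style gradient for one noise level.

    render:   a NeRF-rendered view (any array)
    score_fn: stand-in for the frozen diffusion noise predictor
    sigma:    noise scale for this step
    Returns the residual that would be backpropagated into NeRF params.
    """
    eps = rng.standard_normal(render.shape)
    noisy = render + sigma * eps        # diffuse the rendering
    eps_hat = score_fn(noisy, sigma)    # prior's predicted noise
    return eps_hat - eps                # distillation residual

# Toy sanity check: a predictor that recovers eps exactly yields a
# vanishing gradient, i.e. the render already satisfies the prior.
rng = np.random.default_rng(0)
render = rng.standard_normal((8, 8))
perfect = lambda noisy, sigma: (noisy - render) / sigma
g = distillation_grad(render, perfect, sigma=0.5, rng=rng)
```

In practice this residual is weighted by a timestep-dependent factor and chained through the differentiable renderer; both details are omitted here for clarity.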
Problem

Research questions and friction points this paper is trying to address.

Reconstructing 3D objects from EEG signals
Overcoming noise and data scarcity in EEG-to-3D translation
Developing an EEG encoder with dual self-attention for 3D reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual self-attention EEG encoder architecture
Hybrid training with contrastive learning
Variational Score Distillation for 3D generation
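The contrastive component of the hybrid training can be illustrated with a symmetric InfoNCE loss that pulls each EEG embedding toward the embedding of its paired visual stimulus and pushes it away from the rest of the batch. This is a minimal sketch under CLIP-style assumptions (unit-normalized embeddings, a fixed temperature); the paper's exact loss weighting and the cross-attention and self-supervised terms are not reproduced here.

```python
import numpy as np

def log_softmax(x, axis):
    m = x.max(axis=axis, keepdims=True)
    return x - m - np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def info_nce(eeg_emb, stim_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings.

    Row i of eeg_emb and row i of stim_emb are assumed to come from
    the same trial; all other rows serve as in-batch negatives.
    """
    e = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    s = stim_emb / np.linalg.norm(stim_emb, axis=1, keepdims=True)
    logits = e @ s.T / temperature            # (batch, batch) similarities
    idx = np.arange(len(e))
    loss_e2s = -log_softmax(logits, axis=1)[idx, idx].mean()  # EEG -> stimulus
    loss_s2e = -log_softmax(logits, axis=0)[idx, idx].mean()  # stimulus -> EEG
    return 0.5 * (loss_e2s + loss_s2e)
```

As a sanity check, the loss is near zero when the two embedding sets are identical and close to log(batch_size) when they are unrelated, which is the behavior an aligned EEG-stimulus embedding space should approach during training.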