Aligning What EEG Can See: Structural Representations for Brain-Vision Matching

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the cross-modal mismatch between EEG signals and high-level semantic embeddings from deep visual models by introducing the concept of "neural visibility." The authors propose an EEG-visible layer selection strategy that aligns brain activity with intermediate layers of visual models, reflecting the hierarchical nature of human visual processing. They further develop a Hierarchically Complementary Fusion (HCF) framework that integrates multi-level visual representations to better capture the structure of neural responses. This approach achieves the first structured alignment between EEG and visual features, yielding a zero-shot visual decoding accuracy of 84.6% on the THINGS-EEG dataset, a 21.4% improvement over the baseline, and demonstrates an average performance gain of 129.8% across multiple EEG benchmarks.
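The paper does not publish code, but the layer-selection idea can be sketched with synthetic data. The snippet below is a hypothetical illustration: linear Centered Kernel Alignment (CKA) stands in for whatever "neural visibility" score the authors actually use, and the EEG embeddings and per-layer visual features are random arrays with shared structure injected into one intermediate layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_cka(X, Y):
    # Linear Centered Kernel Alignment between two feature matrices
    # (n_samples x dim). Higher means more similar representational geometry.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    num = np.linalg.norm(Yc.T @ Xc, "fro") ** 2
    den = np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro")
    return num / den

# Hypothetical stand-ins: EEG encoder outputs and per-layer visual features.
n_stimuli, eeg_dim = 200, 64
eeg_emb = rng.normal(size=(n_stimuli, eeg_dim))
layer_feats = {f"layer_{i}": rng.normal(size=(n_stimuli, 128)) for i in range(12)}
# Make one intermediate layer genuinely "EEG-visible" for the demo.
layer_feats["layer_6"][:, :eeg_dim] += 3.0 * eeg_emb

# EEG-visible layer selection: score every layer, keep the best-aligned one.
scores = {name: linear_cka(eeg_emb, feats) for name, feats in layer_feats.items()}
best = max(scores, key=scores.get)
print(best)  # layer_6, since shared structure was injected there
```

The point of the sketch is only the selection loop: scoring each candidate layer against the EEG representation and aligning to the argmax, rather than defaulting to the final layer.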

📝 Abstract
Visual decoding from electroencephalography (EEG) has emerged as a highly promising avenue for non-invasive brain-computer interfaces (BCIs). Existing EEG-based decoding methods predominantly align brain signals with the final-layer semantic embeddings of deep visual models. However, relying on these highly abstracted embeddings inevitably leads to severe cross-modal information mismatch. In this work, we introduce the concept of Neural Visibility and accordingly propose the EEG-Visible Layer Selection Strategy, aligning EEG signals with intermediate visual layers to minimize this mismatch. Furthermore, to accommodate the multi-stage nature of human visual processing, we propose a novel Hierarchically Complementary Fusion (HCF) framework that jointly integrates visual representations from different hierarchical levels. Extensive experiments demonstrate that our method achieves state-of-the-art performance, reaching an 84.6% accuracy (+21.4%) on zero-shot visual decoding on the THINGS-EEG dataset. Moreover, our method achieves up to a 129.8% performance gain across diverse EEG baselines, demonstrating its robust generalizability.
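The fusion step described in the abstract can also be sketched. Everything below is an assumption for illustration: fixed softmax weights stand in for HCF's learned fusion, three random unit-normalized matrices stand in for projected features from different hierarchical levels, and zero-shot decoding is modeled as nearest-prototype retrieval by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(1)

def l2norm(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical shapes: 3 selected visual layers, projected to a shared space.
n_classes, shared_dim = 10, 32
layer_feats = [l2norm(rng.normal(size=(n_classes, shared_dim))) for _ in range(3)]

# Stand-in for learned fusion weights over hierarchy levels (softmax of logits).
fusion_logits = np.array([0.2, 1.5, 0.4])
w = np.exp(fusion_logits) / np.exp(fusion_logits).sum()

# Complementary fusion: weighted sum of per-level class prototypes.
fused = l2norm(sum(wi * feats for wi, feats in zip(w, layer_feats)))

# Zero-shot decoding: match an EEG embedding to the nearest fused prototype.
eeg_emb = l2norm(fused[3] + 0.05 * rng.normal(size=shared_dim))  # noisy class 3
pred = int(np.argmax(fused @ eeg_emb))
print(pred)  # recovers class 3
```

The design point being illustrated: instead of comparing EEG against a single layer's embedding, retrieval runs against prototypes that jointly aggregate several hierarchical levels.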
Problem

Research questions and friction points this paper is trying to address.

EEG
visual decoding
cross-modal mismatch
brain-computer interface
semantic embedding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Visibility
EEG-Visible Layer Selection
Hierarchically Complementary Fusion
brain-vision alignment
visual decoding
Authors
Jingyi Tang (Beijing University of Posts and Telecommunications, Beijing, 100876, China)
Shuai Jiang (Google; power electronics)
Fei Su (Beijing University of Posts and Telecommunications, Beijing, 100876, China)
Zhicheng Zhao (Associate Professor, School of Artificial Intelligence, Anhui University; Computer Vision)