🤖 AI Summary
This work addresses the cross-modal mismatch between EEG signals and the high-level semantic embeddings of deep visual models by introducing the concept of "neural visibility." The authors propose an EEG-visible layer selection strategy that aligns brain activity with intermediate layers of visual models, reflecting the hierarchical nature of human visual processing. They further develop a Hierarchically Complementary Fusion (HCF) framework that integrates visual representations from multiple hierarchical levels to better capture the structure of neural responses. This approach achieves the first structured alignment between EEG and visual features, yielding a zero-shot visual decoding accuracy of 84.6% on the THINGS-EEG dataset, a 21.4% improvement over the baseline, and delivering up to a 129.8% performance gain across diverse EEG decoding baselines.
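To make the layer-selection idea concrete, here is a minimal sketch of how an "EEG-visible" layer might be scored. It assumes, purely for illustration, that neural visibility is measured by linear CKA between EEG responses and each candidate layer's features over a shared stimulus set; the summary does not specify the paper's actual criterion, and `linear_cka`, `select_visible_layer`, and all tensor shapes are hypothetical.

```python
# Hypothetical scoring of "neural visibility" per visual-model layer.
# Assumption: visibility ~ linear CKA between EEG features and layer features.
import torch

def linear_cka(X: torch.Tensor, Y: torch.Tensor) -> float:
    """Linear CKA between two feature matrices of shape (n_stimuli, dim)."""
    X = X - X.mean(dim=0, keepdim=True)   # center each feature column
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (X.T @ Y).norm() ** 2          # ||X^T Y||_F^2
    norm_x = (X.T @ X).norm()             # ||X^T X||_F
    norm_y = (Y.T @ Y).norm()             # ||Y^T Y||_F
    return (hsic / (norm_x * norm_y)).item()

def select_visible_layer(eeg: torch.Tensor,
                         layer_feats: dict[str, torch.Tensor]) -> str:
    """Return the visual layer whose features align best with the EEG data."""
    scores = {name: linear_cka(eeg, f) for name, f in layer_feats.items()}
    return max(scores, key=scores.get)

# Toy usage: 200 stimuli, EEG flattened to 512 dims, three candidate layers.
eeg = torch.randn(200, 512)
feats = {f"block{i}": torch.randn(200, 768) for i in (4, 8, 12)}
print(select_visible_layer(eeg, feats))
```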
📝 Abstract
Visual decoding from electroencephalography (EEG) has emerged as a highly promising avenue for non-invasive brain-computer interfaces (BCIs). Existing EEG-based decoding methods predominantly align brain signals with the final-layer semantic embeddings of deep visual models. However, relying on these highly abstracted embeddings inevitably leads to severe cross-modal information mismatch. In this work, we introduce the concept of Neural Visibility and accordingly propose the EEG-Visible Layer Selection Strategy, which aligns EEG signals with intermediate visual layers to minimize this mismatch. Furthermore, to accommodate the multi-stage nature of human visual processing, we propose a novel Hierarchically Complementary Fusion (HCF) framework that jointly integrates visual representations from different hierarchical levels. Extensive experiments demonstrate that our method achieves state-of-the-art performance, reaching 84.6% accuracy (+21.4%) in zero-shot visual decoding on the THINGS-EEG dataset. Moreover, our method achieves up to a 129.8% performance gain across diverse EEG baselines, demonstrating its robust generalizability.
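For intuition about the fusion step, the following is a hedged PyTorch sketch of one way representations from different hierarchical levels could be jointly integrated: each level is projected into a shared embedding space and combined with learned softmax gates. The `HierarchicalFusion` class, its dimensions, and the gating scheme are illustrative assumptions, not the paper's actual HCF implementation.

```python
# Hypothetical multi-level fusion head. Assumption: fusion = per-level
# linear projections + a learned softmax-weighted sum (one gate per level).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalFusion(nn.Module):
    """Project features from several visual layers to a shared space and
    fuse them with learned softmax weights (one scalar gate per level)."""
    def __init__(self, level_dims: list[int], embed_dim: int = 256):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, embed_dim) for d in level_dims)
        self.gates = nn.Parameter(torch.zeros(len(level_dims)))

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # Stack projected levels: (num_levels, batch, embed_dim).
        z = torch.stack([p(f) for p, f in zip(self.proj, feats)])
        w = F.softmax(self.gates, dim=0).view(-1, 1, 1)  # per-level weights
        return F.normalize((w * z).sum(dim=0), dim=-1)   # fused (batch, embed_dim)

# Toy usage: fuse three layers of different widths for a batch of 8 stimuli.
fusion = HierarchicalFusion([512, 768, 1024])
fused = fusion([torch.randn(8, d) for d in (512, 768, 1024)])
print(fused.shape)  # torch.Size([8, 256])
```

A learned per-level gate is just one simple way to let complementary levels contribute unequally; concatenation followed by an MLP would be an equally plausible variant under the same reading of the abstract.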