AI Summary
This work addresses the challenge of aligning brain signals with visual features, which is hindered by modality gaps and the entanglement of semantic and perceptual visual attributes. To this end, it introduces hyperbolic space into brain–vision alignment for the first time. By leveraging geodesic interpolation in hyperbolic geometry, the method effectively fuses and compresses semantic and perceptual visual features, better matching the limited representational capacity and coupled nature of neural signals. Integrated with pretrained vision models and EEG/MEG signal processing, the approach achieves state-of-the-art performance in zero-shot brain-to-image retrieval, improving Top-1 accuracy by 17.3% on the THINGS-EEG dataset and by 9.1% on THINGS-MEG.
Abstract
Recent progress in artificial intelligence has encouraged numerous attempts to understand and decode the human visual system from brain signals. These prior works typically align neural activity independently with semantic and perceptual features extracted from images using pre-trained vision models. However, they fail to account for two key challenges: (1) the modality gap arising from the natural difference in information level between brain signals and images, and (2) the fact that semantic and perceptual features are highly entangled within neural activity. To address these issues, we utilize hyperbolic space, which is well suited to representing differences in information content and has the geometric property that geodesics between two points naturally bend toward the origin, where representational capacity is lower. Leveraging these properties, we propose a novel framework, Hyperbolic Feature Interpolation (HyFI), which interpolates between semantic and perceptual visual features along hyperbolic geodesics. This enables both the fusion and compression of perceptual and semantic information, effectively reflecting the limited expressiveness of brain signals and the entangled nature of these features. As a result, it facilitates better alignment between brain and visual features. We demonstrate that HyFI achieves state-of-the-art performance in zero-shot brain-to-image retrieval, outperforming prior methods with Top-1 accuracy improvements of up to +17.3% on THINGS-EEG and +9.1% on THINGS-MEG.
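To make the geometric intuition concrete, the sketch below illustrates geodesic interpolation in the Poincaré ball model of hyperbolic space (constant curvature −1), using the standard Möbius addition and exponential/logarithmic maps. This is an illustrative sketch only, not the authors' HyFI implementation; the vectors `sem` and `per` are hypothetical stand-ins for semantic and perceptual features. It demonstrates the property the abstract relies on: the geodesic midpoint of two points bends toward the origin, landing closer to it than the Euclidean midpoint.

```python
import numpy as np

# Illustrative sketch, NOT the paper's code: geodesic interpolation in
# the unit Poincare ball (curvature -1) via standard Mobius operations.

def mobius_add(x, y):
    """Mobius addition x (+) y in the unit Poincare ball."""
    x2, y2, xy = x @ x, y @ y, x @ y
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    return num / (1 + 2 * xy + x2 * y2)

def log_map(x, y):
    """Logarithmic map: tangent vector at x pointing toward y."""
    w = mobius_add(-x, y)
    nw = np.linalg.norm(w)
    lam = 2.0 / (1.0 - x @ x)          # conformal factor at x
    return (2.0 / lam) * np.arctanh(nw) * w / nw

def exp_map(x, v):
    """Exponential map: project tangent vector v at x back onto the ball."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:                     # zero vector: stay at x
        return x.copy()
    lam = 2.0 / (1.0 - x @ x)
    return mobius_add(x, np.tanh(lam * nv / 2.0) * v / nv)

def geodesic(x, y, t):
    """Point at fraction t along the hyperbolic geodesic from x to y."""
    return exp_map(x, t * log_map(x, y))

# Hypothetical "semantic" and "perceptual" feature vectors in the ball:
sem = np.array([0.5, 0.0])
per = np.array([0.0, 0.5])
mid = geodesic(sem, per, 0.5)
# The geodesic bends toward the origin, so the hyperbolic midpoint has
# a smaller norm than the Euclidean midpoint of the two features:
print(np.linalg.norm(mid) < np.linalg.norm((sem + per) / 2))  # True
```

Varying `t` in `geodesic(sem, per, t)` traces the full interpolation path, which is how a fused, compressed representation between the two feature types could be parameterized.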