🤖 AI Summary
This work addresses the cross-modal reconstruction of visual images from fMRI signals, focusing on identifying the latent-space structure best aligned with neural representations. In contrast to dominant pixel-based or generic latent-space approaches, we empirically demonstrate that fMRI responses align more strongly with the structured semantic space of pretrained language models. Accordingly, we propose a “text-bridging” paradigm: decoding fMRI signals into structured textual prompts—comprising objects, attributes, and relational descriptions—and then conditioning diffusion models on these prompts for image generation. Our method integrates fMRI-to-text decoding, text-space alignment, object-centric attribute–relation mining, and conditional generative modeling. Evaluated on real fMRI datasets, our approach reduces perceptual loss by up to 8% over state-of-the-art baselines. It is the first to systematically validate that structured semantic text spaces enhance both the fidelity and interpretability of neural encoding models and reconstructed images.
📝 Abstract
Understanding how the brain encodes visual information is a central challenge in neuroscience and machine learning. A promising approach is to reconstruct visual stimuli, essentially images, from functional Magnetic Resonance Imaging (fMRI) signals. This involves two stages: transforming fMRI signals into a latent space, and then using a pretrained generative model to reconstruct images from that space. Reconstruction quality depends on how closely the latent space matches the structure of neural activity and how well the generative model produces images from it. Yet it remains unclear which type of latent space best supports this transformation, and how it should be organized to represent visual stimuli effectively. We present two key findings. First, fMRI signals are more similar to the text space of a language model than to either a vision-based space or a joint text–image space. Second, text representations and the generative model should be adapted to capture the compositional nature of visual stimuli, including objects, their detailed attributes, and their relationships. Building on these insights, we propose PRISM, a model that Projects fMRI sIgnals into a Structured text space as an interMediate representation for visual stimulus reconstruction. It includes an object-centric diffusion module that generates images by composing individual objects, reducing object detection errors, and an attribute–relationship search module that automatically identifies the key attributes and relationships that best align with neural activity. Extensive experiments on real-world datasets demonstrate that our framework outperforms existing methods, achieving up to an 8% reduction in perceptual loss. These results highlight the importance of using structured text as the intermediate space bridging fMRI signals and image reconstruction.
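To make the text-bridging idea concrete, here is a minimal sketch of the two-stage flow described above: a linear map projects an fMRI response into a text embedding space, and the nearest structured prompt (objects, attributes, relations) is retrieved as the conditioning signal for a downstream diffusion model. All names, shapes, prompts, and the random projection are illustrative assumptions, not the paper's actual architecture, which learns this alignment and searches attributes/relationships automatically.

```python
import numpy as np

# Toy dimensions (assumed, not from the paper).
rng = np.random.default_rng(0)
N_VOXELS, D_TEXT = 512, 64

# Stage 1: map an fMRI response into a text embedding space.
# Here W is a random linear projection; in practice this mapping
# would be learned so fMRI features align with language-model
# embeddings of structured prompts.
W = rng.normal(size=(D_TEXT, N_VOXELS)) / np.sqrt(N_VOXELS)

def fmri_to_text_embedding(fmri: np.ndarray) -> np.ndarray:
    """Project voxel activations to a unit-norm text-space vector."""
    z = W @ fmri
    return z / np.linalg.norm(z)

# Stage 2: retrieve the closest structured prompt by cosine similarity.
# The prompts and their embeddings are toy stand-ins; a real system
# would embed prompts with a pretrained language model.
prompts = [
    "a red car parked next to a tree",
    "a dog running on green grass",
    "a person riding a bicycle on a road",
]
prompt_embs = rng.normal(size=(len(prompts), D_TEXT))
prompt_embs /= np.linalg.norm(prompt_embs, axis=1, keepdims=True)

def decode_prompt(fmri: np.ndarray) -> str:
    """Return the structured prompt best aligned with the fMRI signal."""
    z = fmri_to_text_embedding(fmri)
    scores = prompt_embs @ z  # cosine similarities (all vectors unit-norm)
    return prompts[int(np.argmax(scores))]

fmri_sample = rng.normal(size=N_VOXELS)
# The decoded prompt would then condition a pretrained diffusion model
# to reconstruct the image.
print(decode_prompt(fmri_sample))
```

The retrieval step stands in for the paper's object-centric generation and attribute–relationship search; the point is only to show why a structured text prompt, rather than a raw latent vector, serves as the intermediate representation.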