🤖 AI Summary
Traditional static receptive field models fail to capture how the brain dynamically routes retinotopically organized visual features to category-selective higher visual areas under natural viewing conditions.
Method: We propose the first Transformer-based neural encoding model that implements dynamic, category-specific routing from retinal input to high-level visual regions (e.g., FFA, PPA). Our approach integrates multimodal neuroimaging response modeling (fMRI/MEG), cross-DNN feature-space adaptation, and attention interpretability analysis—without requiring explicit saliency maps.
Contribution/Results: The model significantly improves prediction accuracy of natural-image-evoked brain activity across both fMRI and MEG modalities, generalizes across diverse DNN feature representations, and reveals—for the first time—a category-driven dynamic attentional routing mechanism. By unifying high predictive performance with mechanistic interpretability, our work advances computational neuroscience and interpretable brain–machine interface design.
📝 Abstract
A major goal of neuroscience is to understand brain computations during visual processing in naturalistic settings. A dominant approach uses image-computable deep neural networks, trained with different task objectives, as a basis for linear encoding models. However, in addition to requiring a large number of parameters to be tuned, the linear encoding approach ignores the structure of the feature maps in both the brain and the models. Recently proposed alternatives decompose the linear mapping into spatial and feature components, but they recover only static receptive fields for units, which are applicable only in early visual areas. In this work, we employ the attention mechanism of the transformer architecture to study how retinotopic visual features can be dynamically routed to category-selective areas in high-level visual processing. We show that this computational motif is significantly more powerful than alternative methods at predicting brain activity during natural scene viewing, across different feature basis models and modalities. We also show that this approach is inherently more interpretable: the attention routing signal for different high-level categorical areas can be read out directly, without the need to create importance maps. Our approach proposes a mechanistic model of how visual information from retinotopic maps can be routed based on the relevance of the input content to different category-selective regions.
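The core computational motif described in the abstract can be sketched as cross-attention: a learned query per category-selective region (e.g., FFA, PPA) attends over the spatial positions of a retinotopic DNN feature map, producing a content-dependent routing of features to that region. The sketch below is a minimal NumPy illustration under assumed shapes and names (`route_features`, `roi_queries`, etc. are illustrative), not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route_features(feature_map, roi_queries, W_k, W_v):
    """Dynamically route retinotopic features to category-selective ROIs.

    feature_map : (n_positions, d) DNN features, one row per retinotopic location
    roi_queries : (n_rois, d_k) learned query vector per ROI (e.g., FFA, PPA)
    W_k, W_v    : (d, d_k) and (d, d_v) key/value projections of the features

    Returns the routed features per ROI, (n_rois, d_v), and the attention
    weights, (n_rois, n_positions), which serve directly as an
    interpretability map over retinotopic space.
    """
    K = feature_map @ W_k                          # keys per spatial position
    V = feature_map @ W_v                          # values per spatial position
    scores = roi_queries @ K.T / np.sqrt(K.shape[1])
    attn = softmax(scores, axis=-1)                # routing weights over space
    return attn @ V, attn

# Toy usage with random parameters (in practice, learned by fitting brain data)
rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8))   # 16 retinotopic positions, 8 features
queries = rng.standard_normal((3, 4)) # 3 ROIs
W_k = rng.standard_normal((8, 4))
W_v = rng.standard_normal((8, 5))
routed, attn = route_features(feat, queries, W_k, W_v)
```

In a full encoding model, `routed` would be linearly mapped to voxel (fMRI) or sensor (MEG) responses, and `attn` inspected per ROI to reveal which image locations drive each category-selective area.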