AI Summary
Existing fMRI datasets suffer from limited semantic breadth and low neural response density, constraining the accuracy of neuro-AI modeling. To address this, we introduce the first high-density fMRI dataset integrating THINGS and CNeuroMod resources, comprising 720 object categories and approximately 4,000 standardized image stimuli. Data were collected using a continuous recognition paradigm across 33–36 high-quality fMRI and behavioral sessions per participant (N=4). Our key innovation lies in the first large-scale integration of semantically rich stimuli with deep repeated measurements, enabling enhanced stability, reproducibility, and cross-concept generalizability of neural representations. This dataset constitutes the most densely sampled and semantically comprehensive visual cognition benchmark to date, specifically designed to train advanced neuro-AI models, including brain-image alignment models, thereby strengthening the empirical foundation for interpretable, cognitively grounded artificial intelligence.
Abstract
Data-hungry neuro-AI modelling requires ever larger neuroimaging datasets. CNeuroMod-THINGS meets this need by capturing neural representations for a wide set of semantic concepts using well-characterized stimuli in a new densely sampled, large-scale fMRI dataset. Importantly, CNeuroMod-THINGS exploits synergies between two existing projects: the THINGS initiative (THINGS) and the Courtois Project on Neural Modelling (CNeuroMod). THINGS has developed a common set of thoroughly annotated images broadly sampling natural and man-made objects, which is used to acquire a growing collection of large-scale multimodal neural responses. Meanwhile, CNeuroMod is acquiring hundreds of hours of fMRI data from a core set of participants during controlled and naturalistic tasks, including visual tasks like movie watching and video game playing. For CNeuroMod-THINGS, four CNeuroMod participants each completed 33–36 sessions of a continuous recognition paradigm using approximately 4,000 images from the THINGS stimulus set spanning 720 categories. We report behavioural and neuroimaging metrics that showcase the quality of the data. By bridging two large existing resources, CNeuroMod-THINGS expands our capacity to model broad slices of the human visual experience.
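The continuous recognition paradigm is only named in the abstract. As a rough illustration of how such a design is typically structured (not the actual CNeuroMod-THINGS protocol), the sketch below mixes previously unseen images with repeats drawn from already-presented images, so that the participant judges each trial as "seen" or "unseen". All identifiers, counts, and parameter names here are hypothetical.

```python
import random

def build_session_trials(new_images, seen_pool, n_repeats, seed=0):
    """Minimal sketch of one continuous recognition session.

    Mixes unseen images with repeats sampled from previously shown
    images; the participant's task is to label each image as having
    been seen before or not. This is an illustrative design, not the
    dataset's actual trial structure.
    """
    rng = random.Random(seed)
    repeats = rng.sample(seen_pool, min(n_repeats, len(seen_pool)))
    trials = [{"image": img, "condition": "unseen"} for img in new_images]
    trials += [{"image": img, "condition": "seen"} for img in repeats]
    rng.shuffle(trials)  # interleave new and repeated images
    return trials

# Hypothetical usage: placeholder identifiers stand in for THINGS stimuli.
previously_shown = [f"img_{i:04d}" for i in range(100)]
fresh_images = [f"img_{i:04d}" for i in range(100, 160)]
session = build_session_trials(fresh_images, previously_shown, n_repeats=40)
print(session[0])  # e.g. {'image': 'img_0123', 'condition': 'seen'}
```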