CNeuroMod-THINGS, a densely-sampled fMRI dataset for visual neuroscience

📅 2025-07-11
đŸ€– AI Summary
Existing fMRI datasets suffer from limited semantic breadth and low neural response density, constraining the accuracy of neuro-AI modeling. To address this, we introduce the first high-density fMRI dataset integrating THINGS and CNeuroMod resources, comprising 720 object categories and 4,000 standardized image stimuli. Data were collected using a continuous recognition paradigm across 33–36 high-quality fMRI and behavioral sessions per participant (N=4). Our key innovation lies in the first large-scale integration of semantically rich stimuli with deep repeated measurements—enabling enhanced stability, reproducibility, and cross-concept generalizability of neural representations. This dataset constitutes the most densely sampled and semantically comprehensive visual cognition benchmark to date, specifically designed to train advanced neuro-AI models—including brain-image alignment models—thereby advancing the empirical foundation for interpretable, cognitively grounded artificial intelligence.

📝 Abstract
Data-hungry neuro-AI modelling requires ever larger neuroimaging datasets. CNeuroMod-THINGS meets this need by capturing neural representations for a wide set of semantic concepts using well-characterized stimuli in a new densely-sampled, large-scale fMRI dataset. Importantly, CNeuroMod-THINGS exploits synergies between two existing projects: the THINGS initiative (THINGS) and the Courtois Project on Neural Modelling (CNeuroMod). THINGS has developed a common set of thoroughly annotated images broadly sampling natural and man-made objects which is used to acquire a growing collection of large-scale multimodal neural responses. Meanwhile, CNeuroMod is acquiring hundreds of hours of fMRI data from a core set of participants during controlled and naturalistic tasks, including visual tasks like movie watching and videogame playing. For CNeuroMod-THINGS, four CNeuroMod participants each completed 33-36 sessions of a continuous recognition paradigm using approximately 4000 images from the THINGS stimulus set spanning 720 categories. We report behavioural and neuroimaging metrics that showcase the quality of the data. By bridging together large existing resources, CNeuroMod-THINGS expands our capacity to model broad slices of the human visual experience.
Problem

Research questions and friction points this paper is trying to address.

Addresses need for large-scale fMRI datasets in neuro-AI modeling
Bridges THINGS and CNeuroMod projects to enhance visual neuroscience research
Provides densely-sampled neural data for diverse semantic concepts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines THINGS and CNeuroMod datasets
Uses densely-sampled fMRI for visual tasks
Leverages 4000 annotated images across 720 categories
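In the continuous recognition paradigm described above, participants judge on each trial whether an image is new or has been seen before, so a standard way to summarize behavioural data quality is signal-detection sensitivity (d′) computed from hit and false-alarm counts. The sketch below is a minimal, hypothetical illustration of that metric; the trial counts are invented and the function is not taken from the paper's analysis code.

```python
from statistics import NormalDist

def dprime(hits: int, misses: int,
           false_alarms: int, correct_rejections: int) -> float:
    """Signal-detection sensitivity (d') for an old/new recognition task.

    Adds 0.5 to each cell (log-linear correction) so that hit or
    false-alarm rates of exactly 0 or 1 do not yield infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one session: responses to repeated ("old")
# versus first-presentation ("new") images.
print(f"d' = {dprime(hits=180, misses=20, false_alarms=15, correct_rejections=185):.2f}")
```

Higher d′ indicates better discrimination between repeated and novel images; a value near 0 would indicate chance performance, which is one simple check that sessions were completed attentively.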
Marie St-Laurent
Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Centre de recherche de l’Institut universitaire de gĂ©riatrie de MontrĂ©al, MontrĂ©al, Canada
Basile Pinsard
Centre de recherche de l’Institut universitaire de gĂ©riatrie de MontrĂ©al, MontrĂ©al, Canada
Oliver Contier
Vision and Computational Cognition lab, Max Planck Institute for Human Cognitive and Brain Sciences
Psychology · Cognitive Neuroscience · fMRI
Elizabeth DuPre
Centre de recherche de l’Institut universitaire de gĂ©riatrie de MontrĂ©al, MontrĂ©al, Canada; DĂ©partement de psychologie, UniversitĂ© de MontrĂ©al, MontrĂ©al, Canada
Katja Seeliger
Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Valentina Borghesani
Centre de recherche de l’Institut universitaire de gĂ©riatrie de MontrĂ©al, MontrĂ©al, Canada; FacultĂ© de psychologie et des sciences de l’éducation, UniversitĂ© de GenĂšve, GenĂšve, Switzerland
Julie A. Boyle
Centre de recherche de l’Institut universitaire de gĂ©riatrie de MontrĂ©al, MontrĂ©al, Canada
Lune Bellec
Centre de recherche de l’Institut universitaire de gĂ©riatrie de MontrĂ©al, MontrĂ©al, Canada; DĂ©partement de psychologie, UniversitĂ© de MontrĂ©al, MontrĂ©al, Canada
Martin N. Hebart
Justus Liebig University Giessen / Max Planck CBS Leipzig, Germany
Visual perception · multivariate pattern analysis · cognitive computational neuroscience · NeuroAI