From Prediction to Understanding: Will AI Foundation Models Transform Brain Science?

📅 2025-09-21
📈 Citations: 0 (influential: 0)
🤖 AI Summary
Current AI foundation models in the brain sciences achieve strong predictive accuracy on neural data but offer little interpretability or mechanistic insight into brain computation and cognition. Method: This perspective examines how generative pretraining and large pretrained ("foundation") models, adapted from language and other domains, can be productively integrated into neuroscience. Contribution/Results: The authors outline both the promise and the limitations of these models and argue that the central challenge is moving from prediction to explanation: linking a model's internal computations to the mechanisms underlying neural activity and cognition. The piece clarifies key obstacles and viable pathways toward interpretable, mechanism-driven modeling in computational neuroscience.

📝 Abstract
Generative pretraining (the "GPT" in ChatGPT) enables language models to learn from vast amounts of internet text without human supervision. This approach has driven breakthroughs across AI by allowing deep neural networks to learn from massive, unstructured datasets. We use the term foundation models to refer to large pretrained systems that can be adapted to a wide range of tasks within and across domains, and these models are increasingly applied beyond language to the brain sciences. These models achieve strong predictive accuracy, raising hopes that they might illuminate computational principles. But predictive success alone does not guarantee scientific understanding. Here, we outline how foundation models can be productively integrated into the brain sciences, highlighting both their promise and their limitations. The central challenge is to move from prediction to explanation: linking model computations to mechanisms underlying neural activity and cognition.
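To make the abstract's notion of generative pretraining concrete, here is a minimal, hypothetical sketch of the underlying objective: predict each next token from unlabeled text and minimize the average negative log-likelihood. A bigram count model stands in for the deep network; all names and the toy corpus are illustrative, not from the paper.

```python
import math
from collections import Counter, defaultdict

def train_bigram(tokens, alpha=1.0):
    """Count next-token statistics with add-alpha smoothing (toy stand-in for a network)."""
    vocab = sorted(set(tokens))
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

    def prob(nxt, prev):
        total = sum(counts[prev].values()) + alpha * len(vocab)
        return (counts[prev][nxt] + alpha) / total

    return prob, vocab

def avg_neg_log_likelihood(tokens, prob):
    """The pretraining loss: average negative log-probability of each next token."""
    nll = [-math.log(prob(nxt, prev)) for prev, nxt in zip(tokens, tokens[1:])]
    return sum(nll) / len(nll)

# Unsupervised "training data": raw text, no human labels.
corpus = "the brain predicts the world and the model predicts the brain".split()
prob, vocab = train_bigram(corpus)
loss = avg_neg_log_likelihood(corpus, prob)
```

In an actual foundation model, the count table is replaced by a transformer and the same next-token loss is minimized by gradient descent over internet-scale text; the objective itself is unchanged.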
Problem

Research questions and friction points this paper is trying to address.

Moving from predictive accuracy to scientific understanding in brain science
Linking AI model computations to neural activity mechanisms and cognition
Productively integrating foundation models while addressing their limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using generative pretraining for unsupervised learning
Adapting foundation models across multiple scientific domains
Linking model computations to neural mechanisms
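One standard way the literature links model computations to neural activity, and a plausible starting point for the program sketched above, is a linear encoding model: regress recorded responses onto a model layer's features and score held-out predictions. The sketch below uses simulated data throughout; it illustrates the general technique, not the paper's own analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_feat, n_neurons = 200, 50, 10

# Hypothetical model-layer features per stimulus, and simulated recordings
# generated from an unknown linear feature-to-neuron mapping plus noise.
X = rng.standard_normal((n_stim, n_feat))
W_true = rng.standard_normal((n_feat, n_neurons))
Y = X @ W_true + 0.1 * rng.standard_normal((n_stim, n_neurons))

# Ridge regression fit on the first half of the stimuli, evaluated on the rest.
X_tr, X_te, Y_tr, Y_te = X[:100], X[100:], Y[:100], Y[100:]
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_feat), X_tr.T @ Y_tr)
Y_hat = X_te @ W

# Held-out correlation per neuron: high values mean the model's features carry
# the information the neurons encode -- prediction, not yet mechanism, which is
# exactly the gap this paper highlights.
r = [np.corrcoef(Y_te[:, i], Y_hat[:, i])[0, 1] for i in range(n_neurons)]
```

High held-out correlations establish shared information between model and brain, but, as the abstract stresses, they do not by themselves explain the underlying computation.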
Thomas Serre
Departments of Cognitive & Psychological Sciences and Computer Science, Carney Center for Computational Brain Science, Brown University
Ellie Pavlick
Brown University