Scaling Laws for Task-Optimized Models of the Primate Visual Ventral Stream

📅 2024-11-08
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
Understanding how scaling—model size, dataset scale, and compute—affects neural and behavioral alignment between artificial neural networks (ANNs) and the primate visual ventral stream (V1–IT) remains unresolved. Method: a large-scale, controlled training study of over 600 models, evaluating neural response predictivity across visual areas V1, V2, V4, and IT alongside behavioral alignment on standardized benchmarks. Contribution/Results: behavioral alignment improves monotonically with scale, whereas neural alignment saturates across ventral stream regions. Scaling helps most in higher-level visual areas, where small models trained on few samples align poorly, but alignment there saturates as well; higher-quality data and stronger architectural inductive biases improve compute efficiency without lifting this ceiling. These results reveal a key bottleneck in AI–neuroscience convergence: scaling current architectures and datasets alone will not yield better models of the ventral stream, motivating new strategies grounded in biological constraints and structured learning priors.

📝 Abstract
When trained on large-scale object classification datasets, certain artificial neural network models begin to approximate core object recognition behaviors and neural response patterns in the primate brain. While recent machine learning advances suggest that scaling compute, model size, and dataset size improves task performance, the impact of scaling on brain alignment remains unclear. In this study, we explore scaling laws for modeling the primate visual ventral stream by systematically evaluating over 600 models trained under controlled conditions on benchmarks spanning V1, V2, V4, IT and behavior. We find that while behavioral alignment continues to scale with larger models, neural alignment saturates. This observation remains true across model architectures and training datasets, even though models with stronger inductive biases and datasets with higher-quality images are more compute-efficient. Increased scaling is especially beneficial for higher-level visual areas, where small models trained on few samples exhibit only poor alignment. Our results suggest that while scaling current architectures and datasets might suffice for alignment with human core object recognition behavior, it will not yield improved models of the brain's visual ventral stream, highlighting the need for novel strategies in building brain models.
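The saturation described in the abstract is the kind of trend one would quantify by fitting a saturating power law to alignment-versus-compute measurements and inspecting the asymptote. A minimal sketch on synthetic data (the functional form, parameter values, and compute units below are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(compute, ceiling, scale, exponent):
    """Alignment approaches `ceiling` as compute grows:
    y = ceiling - scale * compute**(-exponent)."""
    return ceiling - scale * np.power(compute, -exponent)

# Synthetic "neural alignment" scores that plateau with compute (illustrative only).
rng = np.random.default_rng(0)
compute = np.logspace(0, 6, 20)           # arbitrary compute units
true_params = (0.62, 0.40, 0.35)          # assumed ceiling, scale, exponent
alignment = saturating_power_law(compute, *true_params)
alignment += rng.normal(0.0, 0.005, size=compute.shape)  # measurement noise

# Fit the curve and read off the estimated asymptotic alignment.
popt, _ = curve_fit(saturating_power_law, compute, alignment, p0=(0.5, 0.5, 0.5))
ceiling_hat = popt[0]
```

A ceiling estimate well below perfect predictivity, stable as compute grows, is the signature of the saturation the paper reports; behavioral alignment, by contrast, would show no such plateau over the same range.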
Problem

Research questions and friction points this paper is trying to address.

Investigating how scaling model size, dataset size, and compute affects neural and behavioral alignment in models of the primate ventral stream
Evaluating 600+ models under controlled conditions to test whether neural alignment saturates while behavioral alignment keeps improving
Identifying the limits of current scaling approaches for modeling the brain's visual ventral stream
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically evaluating over 600 models trained under controlled conditions on benchmarks spanning V1, V2, V4, IT, and behavior
Showing that neural alignment saturates with scale while behavioral alignment continues to improve
Demonstrating that stronger inductive biases and higher-quality data improve compute efficiency, and highlighting the need for novel brain-modeling strategies