Representation biases: will we achieve complete understanding by analyzing representations?

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a pervasive bias in neural representation analysis: learned representations systematically over-represent simple features while representing complex ones more weakly and less consistently, distorting common analyses such as PCA, linear regression, and representational similarity analysis (RSA) and leading to erroneous inferences about underlying computational mechanisms. Through theoretical argument and case studies, including one based on homomorphic encryption, the authors demonstrate that representation and computation can be strongly dissociated: similar representations do not imply similar computations, and vice versa. This challenges the implicit assumption that representational analysis alone suffices for a system-level understanding of neural function, establishes methodological caveats for cross-regional, cross-task, and cross-species representational comparisons, and motivates a shift in neural decoding from static characterization of representations toward dynamic computational modeling.
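A minimal numerical sketch of the distortion (not from the paper; the features, random loadings, and 10x over-representation gain below are illustrative assumptions): a population encodes two equally task-relevant features, PCA attributes nearly all variance to the strongly represented one, and RSA scores two computationally identical systems as dissimilar.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_units = 200, 50

x = rng.normal(size=n_stimuli)
f_simple = x                    # simple (linear) feature
f_complex = np.sin(3 * x)       # complex (nonlinear) feature, equally task-relevant

# Random unit loadings; the simple feature gets 10x the encoding gain.
w_simple = rng.normal(size=n_units)
w_complex = rng.normal(size=n_units)
reps_a = 10.0 * np.outer(f_simple, w_simple) + 1.0 * np.outer(f_complex, w_complex)
reps_a += 0.1 * rng.normal(size=reps_a.shape)   # measurement noise

# PCA: almost all variance is attributed to the over-represented feature,
# even though both features carry equal computational weight by construction.
centered = reps_a - reps_a.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
print("variance in top PC:", s[0] ** 2 / np.sum(s ** 2))   # close to 1.0

# RSA: a second system computing the *same* two features, with the gains
# swapped, yields an RDM whose correlation with the first falls well below 1.
reps_b = 1.0 * np.outer(f_simple, w_simple) + 10.0 * np.outer(f_complex, w_complex)

def rdm(r):
    """Pairwise Euclidean distances between stimulus representations."""
    diff = r[:, None, :] - r[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

iu = np.triu_indices(n_stimuli, k=1)
print("RDM correlation:", np.corrcoef(rdm(reps_a)[iu], rdm(reps_b)[iu])[0, 1])
```

The same construction biases regression readouts: a linear decoder fit to these units recovers the over-represented feature far more reliably than the weakly represented one, despite their equal role in the underlying computation.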

📝 Abstract
A common approach in neuroscience is to study neural representations as a means to understand a system -- increasingly, by relating the neural representations to the internal representations learned by computational models. However, recent work in machine learning (Lampinen, 2024) shows that learned feature representations may be biased to over-represent certain features, and to represent others more weakly and less consistently. For example, simple (linear) features may be more strongly and more consistently represented than complex (highly nonlinear) features. These biases could pose challenges for achieving full understanding of a system through representational analysis. In this perspective, we illustrate these challenges -- showing how feature representation biases can lead to strongly biased inferences from common analyses like PCA, regression, and RSA. We also present homomorphic encryption as a simple case study of the potential for strong dissociation between patterns of representation and computation. We discuss the implications of these results for representational comparisons between systems, and for neuroscience more generally.
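The homomorphic encryption case study can be made concrete with a toy additively homomorphic scheme (a one-time pad over the integers mod m; the scheme, modulus, and names below are illustrative assumptions, not the paper's construction): two systems perform exactly the same computation while their internal states are statistically unrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 2**16                                   # modulus for arithmetic mod m

def encrypt(x, key):
    return (x + key) % m

def decrypt(c, key):
    return (c - key) % m

a = rng.integers(0, m // 2, size=1000)
b = rng.integers(0, m // 2, size=1000)

# System 1 computes sums in plaintext; system 2 computes on ciphertexts.
plain_state = (a + b) % m
k_a, k_b = rng.integers(0, m, size=(2, 1000))
cipher_state = (encrypt(a, k_a) + encrypt(b, k_b)) % m

# Same computation: decrypting system 2's state recovers system 1's output...
assert np.array_equal(decrypt(cipher_state, (k_a + k_b) % m), plain_state)

# ...yet the internal "representations" are essentially uncorrelated, so any
# representational comparison would judge the two systems maximally different.
print("state correlation:", np.corrcoef(plain_state, cipher_state)[0, 1])
```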
Problem

Research questions and friction points this paper is trying to address.

Learned feature representations may systematically over-represent simple features and under-represent complex ones
These biases challenge attempts to fully understand a system through representational analysis alone
Dissociations between representation and computation complicate representational comparisons in neuroscience
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing neural representations alongside the learned representations of computational models
Demonstrating how feature representation biases distort PCA, regression, and RSA
Presenting homomorphic encryption as a case study of representation-computation dissociation
Andrew Kyle Lampinen
Research Scientist, DeepMind
deep learning, cognition, language, generalization
Stephanie C. Y. Chan
Google DeepMind
Yuxuan Li
Google DeepMind
Katherine Hermann
Google DeepMind