Integrated representational signatures strengthen specificity in brains and models

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
A fundamental question at the intersection of neuroscience and machine learning is whether brain regions and artificial neural networks (ANNs) performing similar functions rely on equivalent neural representations. Prior work typically employs single representational similarity metrics, limiting comprehensive characterization of representational structure. To address this, we propose a multi-dimensional representational similarity fusion framework that jointly evaluates geometric structure, unit tuning properties, and linear decodability by integrating Representational Similarity Analysis (RSA), Soft Matching, and Linear Predictivity. Moreover, we introduce Similarity Network Fusion (SNF)—a multi-omics–inspired technique—into cross-modal brain–model comparison for the first time. Our framework substantially improves discriminability among brain regions and ANN model families, yielding a robust composite similarity map. Clustering of this map aligns closely with the established anatomical–functional hierarchy of visual cortex, establishing a scalable, multi-perspective paradigm for cross-system representational comparison.

📝 Abstract
The extent to which different neural or artificial neural networks (models) rely on equivalent representations to support similar tasks remains a central question in neuroscience and machine learning. Prior work has typically compared systems using a single representational similarity metric, yet each captures only one facet of representational structure. To address this, we leverage a suite of representational similarity metrics, each capturing a distinct facet of representational correspondence (such as geometry, unit-level tuning, or linear decodability), and assess brain region or model separability using multiple complementary measures. Metrics that preserve geometric or tuning structure (e.g., RSA, Soft Matching) yield stronger region-based discrimination, whereas more flexible mappings such as Linear Predictivity show weaker separation. These findings suggest that geometry and tuning encode brain-region- or model-family-specific signatures, while linearly decodable information tends to be more globally shared across regions or models. To integrate these complementary representational facets, we adapt Similarity Network Fusion (SNF), a framework originally developed for multi-omics data integration. SNF produces substantially sharper regional and model-family-level separation than any single metric and yields robust composite similarity profiles. Moreover, clustering cortical regions using SNF-derived similarity scores reveals a clearer hierarchical organization that aligns closely with established anatomical and functional hierarchies of the visual cortex, surpassing the correspondence achieved by individual metrics.
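To make the geometry-preserving comparison concrete, below is a minimal sketch of classical RSA, one of the metrics the abstract names: build a representational dissimilarity matrix (RDM) for each system from its stimulus-by-unit activation matrix, then correlate the RDMs' upper triangles. This is a generic textbook formulation, not the paper's implementation; the function names and the random toy data are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(activations):
    # activations: (n_stimuli, n_units) array.
    # RDM entry (i, j) = 1 - Pearson correlation between the
    # activation patterns for stimuli i and j.
    return 1.0 - np.corrcoef(activations)

def rsa_score(acts_a, acts_b):
    # Compare the two systems' RDMs via Spearman rank correlation
    # of their upper triangles (diagonal excluded).
    ra, rb = rdm(acts_a), rdm(acts_b)
    iu = np.triu_indices_from(ra, k=1)
    return spearmanr(ra[iu], rb[iu])[0]

# Toy check: a random linear mixing of the same responses should
# largely preserve representational geometry, hence a high RSA score.
rng = np.random.default_rng(0)
x = rng.standard_normal((50, 100))          # "region A": 50 stimuli, 100 units
y = x @ rng.standard_normal((100, 80))      # "region B": linear readout of A
print(rsa_score(x, y))
```

Because RSA compares second-order structure (distances between stimuli) rather than fitting a mapping between unit spaces, it is invariant to rotations and re-scalings of the unit basis, which is why the abstract groups it with the geometry-preserving metrics.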
Problem

Research questions and friction points this paper is trying to address.

Comparing neural representations across brain regions and models
Integrating multiple similarity metrics to enhance specificity
Revealing hierarchical organization in visual cortex representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging multiple representational similarity metrics
Adapting Similarity Network Fusion framework
Integrating complementary facets for clearer separation
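The SNF adaptation listed above can be sketched in a few lines. The standard SNF recipe (from the multi-omics literature) takes one similarity matrix per metric, builds a globally normalized matrix P and a k-nearest-neighbor local kernel S for each, and then cross-diffuses each view through the average of the others. This is a generic simplified sketch under those standard definitions, not the paper's code; the toy two-cluster data and all names are illustrative.

```python
import numpy as np

def row_normalize(w):
    # Global kernel P: off-diagonal mass in each row sums to 1/2, diagonal = 1/2.
    p = w.astype(float).copy()
    np.fill_diagonal(p, 0.0)
    p = p / (2.0 * p.sum(axis=1, keepdims=True))
    np.fill_diagonal(p, 0.5)
    return p

def knn_kernel(w, k):
    # Local kernel S: keep each row's k largest similarities, renormalize rows.
    s = np.zeros_like(w, dtype=float)
    for i in range(len(w)):
        idx = np.argsort(w[i])[-k:]
        s[i, idx] = w[i, idx]
    return s / s.sum(axis=1, keepdims=True)

def snf(similarity_mats, k=3, t=10):
    # Iterative cross-diffusion: each view is updated through the mean of the
    # other views, restricted to local neighborhoods, then symmetrized.
    ps = [row_normalize(w) for w in similarity_mats]
    ss = [knn_kernel(w, k) for w in similarity_mats]
    for _ in range(t):
        new_ps = []
        for v, s in enumerate(ss):
            others = np.mean([ps[u] for u in range(len(ps)) if u != v], axis=0)
            new_ps.append(s @ others @ s.T)
        ps = [(p + p.T) / 2.0 for p in new_ps]
    return np.mean(ps, axis=0)

# Toy example: two noisy "metric views" of the same two-cluster structure.
n = 6
base = np.full((n, n), 0.1)
base[:3, :3] = 1.0
base[3:, 3:] = 1.0
rng = np.random.default_rng(0)
views = [base + 0.05 * np.abs(rng.standard_normal((n, n))) for _ in range(2)]
views = [(v + v.T) / 2.0 for v in views]
fused = snf(views, k=3, t=5)
```

The fused matrix can then be fed to any clustering routine (e.g. spectral clustering), which is how the composite similarity map described in the summary would be turned into the hierarchical grouping of cortical regions.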