Representational Alignment Across Model Layers and Brain Regions with Hierarchical Optimal Transport

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing representational similarity methods align network layers independently, one layer at a time, resulting in asymmetric alignments, no global alignment score, and poor generalizability to deep, heterogeneous models, largely because they neglect global activation structure and enforce rigid one-to-one neuron mappings. To address this, we propose the Hierarchical Optimal Transport (HOT) framework, which jointly models multiple layers under marginal constraints to enable soft, hierarchical cross-model representation alignment. HOT allows source neurons to distribute activation mass across multiple target layers, yielding globally consistent alignment scores and neuron-level transport plans that naturally uncover fine-grained cross-layer correspondences. Evaluated on vision models, large language models, and human fMRI data, HOT consistently outperforms state-of-the-art baselines, producing smooth, structurally coherent hierarchical mappings.

📝 Abstract
Standard representational similarity methods align each layer of a network to its best match in another independently, producing asymmetric results, lacking a global alignment score, and struggling with networks of different depths. These limitations arise from ignoring global activation structure and restricting mappings to rigid one-to-one layer correspondences. We propose Hierarchical Optimal Transport (HOT), a unified framework that jointly infers soft, globally consistent layer-to-layer couplings and neuron-level transport plans. HOT allows source neurons to distribute mass across multiple target layers while minimizing total transport cost under marginal constraints. This yields both a single alignment score for the entire network comparison and a soft transport plan that naturally handles depth mismatches through mass distribution. We evaluate HOT on vision models, large language models, and human visual cortex recordings. Across all domains, HOT matches or surpasses standard pairwise matching in alignment quality. Moreover, it reveals smooth, fine-grained hierarchical correspondences: early layers map to early layers, deeper layers maintain relative positions, and depth mismatches are resolved by distributing representations across multiple layers. These structured patterns emerge naturally from global optimization without being imposed, yet are absent in greedy layer-wise methods. HOT thus enables richer, more interpretable comparisons between representations, particularly when networks differ in architecture or depth.
Problem

Research questions and friction points this paper is trying to address.

Aligning neural network layers across different models and brain regions
Overcoming limitations of asymmetric layer-wise similarity methods
Handling depth mismatches through global hierarchical transport optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses hierarchical optimal transport for global alignment
Enables soft neuron mappings across multiple layers
Handles depth mismatches through mass distribution
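The soft layer-to-layer coupling idea above can be illustrated with a minimal entropic optimal-transport sketch. This is not the paper's full hierarchical formulation (which also infers neuron-level transport plans); it only shows how Sinkhorn iterations with uniform marginals let each source layer distribute mass across several target layers, so a 4-layer source can couple smoothly to a 6-layer target. The cost matrix here is a hypothetical stand-in based on relative depth, not the representational cost the authors use.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Entropic-OT coupling between two discrete uniform distributions.

    cost: (m, n) cost matrix between source and target layers.
    Returns a soft coupling P >= 0 whose marginals are 1/m and 1/n.
    """
    m, n = cost.shape
    a = np.full(m, 1.0 / m)   # uniform mass over source layers
    b = np.full(n, 1.0 / n)   # uniform mass over target layers
    K = np.exp(-cost / reg)   # Gibbs kernel
    u, v = np.ones(m), np.ones(n)
    for _ in range(n_iters):  # alternate marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Hypothetical example: 4-layer source network vs. 6-layer target.
# Cost grows with mismatch in relative depth, so mass should
# concentrate near the diagonal of relative layer positions.
m, n = 4, 6
src = np.linspace(0, 1, m)    # relative depth of each source layer
tgt = np.linspace(0, 1, n)    # relative depth of each target layer
cost = (src[:, None] - tgt[None, :]) ** 2
P = sinkhorn(cost, reg=0.05)

# Target marginals hold by construction after the final v-update.
assert np.allclose(P.sum(axis=0), 1.0 / n)
```

Each row of `P` is a soft assignment of one source layer over all target layers; a depth mismatch is resolved by spreading a layer's mass over neighboring target layers rather than forcing a single best match.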
Shaan Shah
Department of Electrical and Computer Engineering, University of California, San Diego
Meenakshi Khosla
UC San Diego
Computational Neuroscience · Artificial Intelligence · Vision · Audition · Language