🤖 AI Summary
Existing representational similarity methods align each network layer to its best match independently, producing asymmetric alignments, no global alignment score, and poor generalization to deep, heterogeneous models, largely because they ignore global activation structure and enforce rigid one-to-one neuron mappings. To address this, we propose the Hierarchical Optimal Transport (HOT) framework, which jointly models multiple layers under marginal constraints to enable soft, hierarchical cross-model representation alignment. HOT allows source neurons to distribute activation mass across multiple target layers, yielding globally consistent alignment scores and neuron-level transport plans that naturally uncover fine-grained, cross-layer correspondences. Evaluated on vision models, large language models, and human fMRI data, HOT consistently outperforms state-of-the-art baselines, producing smooth, structurally coherent hierarchical mappings.
📝 Abstract
Standard representational similarity methods align each layer of a network independently to its best match in another network, producing asymmetric results, lacking a global alignment score, and struggling with networks of different depths. These limitations arise from ignoring global activation structure and restricting mappings to rigid one-to-one layer correspondences. We propose Hierarchical Optimal Transport (HOT), a unified framework that jointly infers soft, globally consistent layer-to-layer couplings and neuron-level transport plans. HOT allows source neurons to distribute mass across multiple target layers while minimizing total transport cost under marginal constraints. This yields both a single alignment score for the entire network comparison and a soft transport plan that naturally handles depth mismatches through mass distribution. We evaluate HOT on vision models, large language models, and human visual cortex recordings. Across all domains, HOT matches or surpasses standard pairwise matching in alignment quality. Moreover, it reveals smooth, fine-grained hierarchical correspondences: early layers map to early layers, deeper layers maintain relative positions, and depth mismatches are resolved by distributing representations across multiple layers. These structured patterns emerge naturally from global optimization without being imposed, yet are absent in greedy layer-wise methods. HOT thus enables richer, more interpretable comparisons between representations, particularly when networks differ in architecture or depth.
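The core mechanism the abstract describes, a soft transport plan that distributes mass across targets while satisfying marginal constraints and yielding a single global cost, can be sketched with generic entropically regularized optimal transport (Sinkhorn iterations). This is a minimal illustration of that mechanism, not the paper's HOT algorithm: the toy cost matrix, the uniform marginals, and the regularization strength `eps` are all assumptions chosen for the example.

```python
import numpy as np

def sinkhorn(cost, mu, nu, eps=0.5, n_iters=500):
    """Entropic OT: find a soft plan T >= 0 approximately minimizing
    <T, cost> subject to the marginal constraints
    T @ 1 = mu (rows) and T.T @ 1 = nu (columns)."""
    K = np.exp(-cost / eps)        # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):       # alternate marginal rescalings
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy setting: a 3-layer "source" model aligned to a 4-layer "target"
# model, so depth mismatch must be absorbed by spreading mass.
rng = np.random.default_rng(0)
cost = rng.random((3, 4))          # hypothetical pairwise dissimilarities
mu = np.full(3, 1 / 3)             # uniform mass over source layers
nu = np.full(4, 1 / 4)            # uniform mass over target layers

T = sinkhorn(cost, mu, nu)         # soft layer-to-layer coupling
score = float((T * cost).sum())    # one global alignment score
```

Each row of `T` spreads a source layer's mass over several target layers rather than committing to a single best match, which is how a transport formulation sidesteps the rigid one-to-one correspondences of greedy layer-wise matching.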