Partial Soft-Matching Distance for Neural Representational Comparison with Partial Unit Correspondence

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work extends the soft-matching distance to a partial optimal transport setting for comparing neural representations, addressing a limitation of traditional similarity measures: they require a full neuron-to-neuron correspondence, which makes them sensitive to noise and unable to handle partial alignments. By relaxing the mass-conservation constraint of standard optimal transport, the method matches only the most consistent subset of units, improving robustness and interpretability while remaining sensitive to rotational transformations. The approach enables efficient neuron alignment and automatically filters out low-reliability voxels in both simulated and real fMRI data, improving alignment accuracy. Applied to deep neural networks, it identifies highly consistent neuron subsets whose matches closely agree with those obtained via exhaustive search, at substantially lower computational cost.

📝 Abstract
Representational similarity metrics typically force all units to be matched, making them susceptible to noise and outliers common in neural representations. We extend the soft-matching distance to a partial optimal transport setting that allows some neurons to remain unmatched, yielding rotation-sensitive but robust correspondences. This partial soft-matching distance provides theoretical advantages -- relaxing strict mass conservation while maintaining interpretable transport costs -- and practical benefits through efficient neuron ranking in terms of cross-network alignment without costly iterative recomputation. In simulations, it preserves correct matches under outliers and reliably selects the correct model in noise-corrupted identification tasks. On fMRI data, it automatically excludes low-reliability voxels and produces voxel rankings by alignment quality that closely match computationally expensive brute-force approaches. It achieves higher alignment precision across homologous brain areas than standard soft-matching, which is forced to match all units regardless of quality. In deep networks, highly matched units exhibit similar maximally exciting images, while unmatched units show divergent patterns. This ability to partition by match quality enables focused analyses, e.g., testing whether networks have privileged axes even within their most aligned subpopulations. Overall, partial soft-matching provides a principled and practical method for representational comparison under partial correspondence.
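The partial matching described in the abstract can be illustrated with a small linear program. The sketch below is not the authors' implementation: it is a minimal illustration, assuming squared-Euclidean costs between unit tuning curves, in which only a fraction `frac` of the total unit mass must be transported, so that outlier units can be left unmatched. The function name `partial_soft_matching` is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog


def partial_soft_matching(X, Y, frac=0.8):
    """Partial optimal transport between unit tuning curves.

    X: (n, d) responses of n units in network A to d stimuli.
    Y: (m, d) responses of m units in network B.
    frac: fraction of the total unit mass that must be matched.
    Returns the transport plan T (n, m) and the matching cost.
    """
    n, m = X.shape[0], Y.shape[0]
    # Pairwise squared-Euclidean cost between tuning curves.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    a = np.full(n, 1.0 / n)  # source unit masses
    b = np.full(m, 1.0 / m)  # target unit masses
    mass = frac * min(a.sum(), b.sum())

    # LP over the flattened plan T (row-major, T[i, j] at i*m + j):
    # minimize <C, T>  s.t.  T @ 1 <= a,  T.T @ 1 <= b,
    #                        sum(T) == mass,  T >= 0.
    A_ub = np.zeros((n + m, n * m))
    for i in range(n):
        A_ub[i, i * m:(i + 1) * m] = 1.0   # row sums <= a
    for j in range(m):
        A_ub[n + j, j::m] = 1.0            # column sums <= b
    b_ub = np.concatenate([a, b])
    A_eq = np.ones((1, n * m))             # total transported mass
    res = linprog(C.ravel(), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[mass], bounds=(0, None),
                  method="highs")
    T = res.x.reshape(n, m)
    return T, res.fun


# Two well-matched units plus one outlier in X; with frac < 1
# the outlier unit receives (essentially) no transport mass.
X = np.array([[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0]])
T, cost = partial_soft_matching(X, Y, frac=0.6)
```

With full mass conservation (`frac=1.0`) the outlier unit would be forced into the plan and inflate the cost; relaxing the constraint lets the plan concentrate on the consistent subset, which is the behavior the abstract describes.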
Problem

Research questions and friction points this paper is trying to address.

representational similarity
partial correspondence
neural representations
outliers
unit matching
Innovation

Methods, ideas, or system contributions that make the work stand out.

partial optimal transport
soft-matching distance
representational similarity
neural alignment
robust correspondence
Chaitanya Kapoor
Department of Cognitive Science, University of California, San Diego
Alex H. Williams
Center for Neural Science, New York University; Center for Computational Neuroscience, Flatiron Institute
Meenakshi Khosla
UC San Diego
Computational Neuroscience
Artificial Intelligence
Vision
Audition
Language