🤖 AI Summary
Existing representational alignment methods—such as Representational Similarity Analysis (RSA), Centered Kernel Alignment (CKA), and linear regression—systematically underestimate true similarity when neural representations reside in superposition. The bias stems from conflating *what* is represented with *how* it is represented, causing systems that share identical latent features to be erroneously judged dissimilar. This work is the first to uncover this mechanism, and it argues that alignment should be computed on recoverable sparse latent features rather than on raw, mixed activations. Leveraging compressive sensing theory, random projection analysis, and closed-form derivations, the authors prove that under sparsity assumptions the original features can be exactly reconstructed, whereas ignoring the superposition structure yields substantially deflated alignment scores and can even invert similarity rankings. The findings establish a new paradigm for evaluating representational similarity in the presence of superposition.
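The deflation claim can be sanity-checked numerically. The sketch below is illustrative only (the sizes, sparsity level, and random-projection setup are assumptions, not the paper's actual experiments): two systems carry *identical* sparse latent features but mix them through independent random projections into fewer neurons than features, and linear CKA on the raw activations falls far below 1 despite the identical feature content.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two (samples x dims) activation matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
m, k, n = 2000, 512, 64  # samples, latent features, neurons (n < k): superposition

# Sparse latent features, shared *identically* by both systems (~5% active).
Z = rng.normal(size=(m, k)) * (rng.random((m, k)) < 0.05)

# Each system mixes the same features through its own random projection.
W_A = rng.normal(size=(k, n)) / np.sqrt(n)
W_B = rng.normal(size=(k, n)) / np.sqrt(n)
X_A, X_B = Z @ W_A, Z @ W_B

print(linear_cka(Z, Z))      # identical feature content: exactly 1.0
print(linear_cka(X_A, X_B))  # raw activations: far below 1
```

The cross-system score is driven by the overlap of the two random projection matrices rather than by the shared features, which is exactly the confound the summary describes.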
📝 Abstract
Comparing the internal representations of neural networks is a central goal in both neuroscience and machine learning. Standard alignment metrics operate on raw neural activations, implicitly assuming that similar representations produce similar activity patterns. However, neural systems frequently operate in superposition, encoding more features than they have neurons via linear compression. We derive closed-form expressions showing that superposition systematically deflates Representational Similarity Analysis, Centered Kernel Alignment, and linear regression, causing networks with identical feature content to appear dissimilar. The root cause is that these metrics depend not on the latent features themselves but on the cross-similarity between the two systems' superposition matrices, which, for random projections, typically differ substantially: alignment scores conflate what a system represents with how it represents it. Under partial feature overlap, this confound can invert the expected ordering, making systems that share fewer features appear more aligned than systems that share more. Crucially, the apparent misalignment need not reflect a loss of information; compressed sensing guarantees that the original features remain recoverable from the lower-dimensional activity, provided they are sparse. We therefore argue that comparing neural systems in superposition requires extracting and aligning the underlying features rather than comparing the raw neural mixtures.
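The recoverability claim can also be illustrated with a minimal compressed-sensing sketch. This uses Orthogonal Matching Pursuit with illustrative sizes as an assumption; it is not the paper's procedure: a sparse feature vector is projected into far fewer neurons than features, then reconstructed exactly from the low-dimensional activity given the superposition matrix.

```python
import numpy as np

def omp(A, y, max_iter, tol=1e-10):
    """Orthogonal Matching Pursuit: recover a sparse z with y ≈ A @ z."""
    support, r, coef = [], y.copy(), np.zeros(0)
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:       # residual explained: done
            break
        j = int(np.argmax(np.abs(A.T @ r)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        # Refit coefficients by least squares on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    z = np.zeros(A.shape[1])
    z[support] = coef
    return z

rng = np.random.default_rng(1)
k, n, s = 256, 64, 5                       # features, neurons, active features
W = rng.normal(size=(k, n)) / np.sqrt(n)   # superposition (mixing) matrix

# A sparse latent feature vector with well-separated nonzero magnitudes.
z_true = np.zeros(k)
idx = rng.choice(k, size=s, replace=False)
z_true[idx] = rng.choice([-1.0, 1.0], size=s) * (1.0 + rng.random(s))

x = z_true @ W                        # observed n-dim neural activity
z_hat = omp(W.T, x, max_iter=2 * s)   # recover the k-dim features

print(np.allclose(z_hat, z_true))
```

Despite the 4x compression, the sparse features come back exactly, so the low cross-system alignment of the raw activations cannot be blamed on lost information.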