🤖 AI Summary
Assessing feature universality, i.e., whether distinct large language models (LLMs) encode semantic concepts similarly, remains challenging due to neuron polysemanticity and architectural heterogeneity.
Method: To enable cross-model comparison, we propose a disentangled framework based on sparse autoencoders (SAEs): first, we induce interpretable, sparse feature spaces from intermediate layers of multiple mainstream LLMs; second, we align features across models via activation correlation; finally, we quantify representational similarity using canonical correlation analysis (CCA) and its singular-vector variant (SVCCA).
Contribution/Results: We provide the first systematic empirical validation of strong structural consistency across SAE feature spaces in diverse LLMs. Our activation-correlation-based feature matching achieves substantially higher accuracy than random baselines, indicating strong cross-model generalization in latent semantic representations and offering both theoretical grounding and practical tools for interpretable AI and cross-model knowledge transfer.
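As a rough sketch of the alignment step (not the paper's actual implementation), features from two SAEs can be matched by computing the Pearson correlation between their activations over a shared token set and pairing each feature with its most-correlated counterpart. The array shapes, the `1e-8` stabilizer, and the one-to-one argmax pairing below are illustrative assumptions:

```python
import numpy as np

def match_features(acts_a, acts_b):
    """Match SAE features across two models by activation correlation.

    acts_a: (n_tokens, d_a) SAE feature activations from model A
    acts_b: (n_tokens, d_b) SAE feature activations from model B
    Returns, for each feature in A, the index of its best match in B.
    """
    # Standardize each feature column so a scaled dot product equals Pearson r.
    za = (acts_a - acts_a.mean(0)) / (acts_a.std(0) + 1e-8)
    zb = (acts_b - acts_b.mean(0)) / (acts_b.std(0) + 1e-8)
    corr = za.T @ zb / acts_a.shape[0]  # (d_a, d_b) correlation matrix
    # Greedy 1-to-1 pairing: each A-feature takes its most correlated B-feature.
    return corr.argmax(axis=1)
```

A random baseline for the accuracy comparison would simply permute the B-feature indices and score how often the permutation agrees with ground truth.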
📝 Abstract
We investigate feature universality in large language models (LLMs), a research field that aims to understand how different models similarly represent concepts in the latent spaces of their intermediate layers. Demonstrating feature universality allows discoveries about latent representations to generalize across several models. However, comparing features across LLMs is challenging due to polysemanticity, in which individual neurons often encode multiple features rather than a single, distinct one, making it difficult to disentangle and match features across different models. To address this issue, we employ a method known as dictionary learning, using sparse autoencoders (SAEs) to transform LLM activations into more interpretable spaces spanned by neurons corresponding to individual features. After matching feature neurons across models via activation correlation, we apply representational space similarity metrics on SAE feature spaces across different LLMs. Our experiments reveal significant similarities in SAE feature spaces across various LLMs, providing new evidence for feature universality.
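The similarity metric can be sketched in a few lines of numpy. SVCCA first reduces each activation matrix with an SVD (keeping components that explain most of the variance), then computes canonical correlations between the reduced spaces and averages them. The `keep` threshold and the variance-based truncation rule below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def svcca(X, Y, keep=0.99):
    """Minimal SVCCA sketch: rows are samples (tokens), columns are features.

    Returns the mean canonical correlation between the SVD-reduced spaces.
    """
    X = X - X.mean(0)
    Y = Y - Y.mean(0)

    def svd_reduce(M):
        # Keep the top singular directions explaining `keep` of the variance.
        U, s, _ = np.linalg.svd(M, full_matrices=False)
        k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep) + 1
        return U[:, :k] * s[:k]

    Xr, Yr = svd_reduce(X), svd_reduce(Y)
    # CCA via orthonormal bases: canonical correlations are the singular
    # values of Qx^T Qy, where Qx, Qy span the reduced column spaces.
    Qx = np.linalg.svd(Xr, full_matrices=False)[0]
    Qy = np.linalg.svd(Yr, full_matrices=False)[0]
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(np.clip(rho, 0.0, 1.0).mean())
```

Because canonical correlations are invariant to invertible linear transforms of either space, the score stays near 1 for representations related by a change of basis and drops toward the chance level for unrelated ones.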