Sparse Autoencoders Can Interpret Randomly Initialized Transformers

📅 2025-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
It remains unclear whether the mechanistic interpretability of Transformer models fundamentally depends on training, or whether structural properties inherent to the architecture suffice. Method: We investigate whether randomly initialized Transformers, whose parameters are sampled IID from a Gaussian and have undergone no training, can be effectively interpreted using sparse autoencoders (SAEs). Leveraging an open-source, automated interpretability evaluation pipeline, we conduct systematic comparisons across layers, SAE scales, and model sizes. Contribution/Results: We find that SAE latent features extracted from random Transformers closely match those of their trained counterparts on key metrics: semantic coherence, L0 sparsity, reconstruction error, and feature one-hotness. This is the first systematic demonstration that SAE-based interpretability need not arise from learning, challenging the implicit assumption that interpretability emerges only through training. Our findings establish a zero-shot interpretability benchmark and suggest that the Transformer architecture intrinsically possesses structural interpretability potential.
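
To make the setup concrete, here is a minimal sketch (not the authors' pipeline) of the core experiment: a GPT-2-style transformer built from a fresh config, so its weights keep their default Gaussian initialization, with a standard ReLU sparse autoencoder trained on activations from one layer. The SAE architecture, the choice of layer, and all hyperparameters below are illustrative assumptions.

```python
# Sketch only: random transformer + vanilla SAE on its activations.
import torch
import torch.nn as nn
from transformers import GPT2Config, GPT2Model, GPT2Tokenizer

class SparseAutoencoder(nn.Module):
    """Overcomplete ReLU dictionary with an L1 sparsity penalty (assumed variant)."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        latents = torch.relu(self.encoder(x))  # sparse latent features
        recon = self.decoder(latents)          # reconstructed activation
        return recon, latents

# Constructing from a config (no pretrained weights) yields a randomly
# initialized model; GPT-2's default init draws weights from a Gaussian.
model = GPT2Model(GPT2Config())
model.eval()

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # tokenizer files only
inputs = tokenizer("The quick brown fox jumps over the lazy dog.",
                   return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
acts = out.hidden_states[6].squeeze(0)  # residual stream at an example layer

sae = SparseAutoencoder(d_model=acts.shape[-1], d_hidden=8 * acts.shape[-1])
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3  # illustrative sparsity/reconstruction trade-off

for _ in range(100):  # in practice: many activations from a large corpus
    recon, latents = sae(acts)
    loss = ((recon - acts) ** 2).mean() + l1_coeff * latents.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```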

📝 Abstract
Sparse autoencoders (SAEs) are an increasingly popular technique for interpreting the internal representations of transformers. In this paper, we apply SAEs to 'interpret' random transformers, i.e., transformers where the parameters are sampled IID from a Gaussian rather than trained on text data. We find that random and trained transformers produce similarly interpretable SAE latents, and we confirm this finding quantitatively using an open-source auto-interpretability pipeline. Further, we find that SAE quality metrics are broadly similar for random and trained transformers. We find that these results hold across model sizes and layers. We discuss a number of interesting questions that this work raises for the use of SAEs and auto-interpretability in the context of mechanistic interpretability.
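
The abstract compares SAE quality metrics between random and trained models; the sketch below shows how two of the named metrics, L0 sparsity and reconstruction error, are typically computed. The helper names are illustrative, not taken from the authors' pipeline.

```python
# Hedged sketch of two SAE quality metrics; names are illustrative.
import torch

def l0_sparsity(latents: torch.Tensor) -> float:
    """Average number of nonzero SAE latents per token (lower = sparser)."""
    return (latents != 0).float().sum(dim=-1).mean().item()

def reconstruction_mse(acts: torch.Tensor, recon: torch.Tensor) -> float:
    """Mean squared error between activations and their SAE reconstruction."""
    return ((acts - recon) ** 2).mean().item()

# Usage with the SAE sketch above:
#   recon, latents = sae(acts)
#   print(l0_sparsity(latents), reconstruction_mse(acts, recon))
```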
Problem

Research questions and friction points this paper is trying to address.

Randomly Initialized Transformer Models
Sparse Autoencoders
Parameter Comparison
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Autoencoders
Model Interpretability
Complex Model Understanding