Position: Mechanistic Interpretability Should Prioritize Feature Consistency in SAEs

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sparse autoencoders (SAEs) suffer from feature inconsistency across training runs, a critical bottleneck in mechanistic interpretability (MI) that undermines the reliability and reproducibility of interpretations. To address this, the paper argues that **feature consistency** should be treated as a primary objective alongside the conventional reconstruction and sparsity goals. It proposes the **Pairwise Dictionary Mean Correlation Coefficient (PW-MCC)**, a practical metric for quantifying consistency across SAE dictionaries, and supports it with a three-tier empirical framework: theoretical analysis, validation on a synthetic model organism, and application to real LLM activations. On LLM activations, TopK SAEs achieve a PW-MCC of 0.80, a substantial level of cross-run feature alignment that correlates strongly with the semantic similarity of learned feature explanations. The work establishes feature consistency as a foundational consideration in SAE training and proposes PW-MCC as a standard for evaluating it in MI research.


📝 Abstract
Sparse Autoencoders (SAEs) are a prominent tool in mechanistic interpretability (MI) for decomposing neural network activations into interpretable features. However, the aspiration to identify a canonical set of features is challenged by the observed inconsistency of learned SAE features across different training runs, undermining the reliability and efficiency of MI research. This position paper argues that mechanistic interpretability should prioritize feature consistency in SAEs -- the reliable convergence to equivalent feature sets across independent runs. We propose using the Pairwise Dictionary Mean Correlation Coefficient (PW-MCC) as a practical metric to operationalize consistency and demonstrate that high levels are achievable (0.80 for TopK SAEs on LLM activations) with appropriate architectural choices. Our contributions include detailing the benefits of prioritizing consistency; providing theoretical grounding and synthetic validation using a model organism, which verifies PW-MCC as a reliable proxy for ground-truth recovery; and extending these findings to real-world LLM data, where high feature consistency strongly correlates with the semantic similarity of learned feature explanations. We call for a community-wide shift towards systematically measuring feature consistency to foster robust cumulative progress in MI.
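The abstract does not spell out how PW-MCC is computed. A minimal sketch of one plausible construction, assuming the metric matches decoder features one-to-one across two runs (via maximum-weight assignment on absolute cosine similarity, since feature sign and order are arbitrary) and averages the matched similarities; the function name and exact formula here are illustrative, not the paper's definition:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pw_mcc(D1: np.ndarray, D2: np.ndarray) -> float:
    """Mean correlation between optimally matched dictionary features.

    D1, D2: (d_model, n_features) decoder matrices from two SAE runs.
    Features are matched one-to-one to maximize total |cosine similarity|,
    and the mean similarity over matched pairs is returned.
    """
    # Normalize each feature (column) to unit norm.
    U1 = D1 / (np.linalg.norm(D1, axis=0, keepdims=True) + 1e-8)
    U2 = D2 / (np.linalg.norm(D2, axis=0, keepdims=True) + 1e-8)
    # Pairwise absolute cosine similarities (feature sign is arbitrary).
    sim = np.abs(U1.T @ U2)
    # Optimal one-to-one assignment maximizing total similarity.
    rows, cols = linear_sum_assignment(-sim)
    return float(sim[rows, cols].mean())
```

Under this reading, two runs that learn the same dictionary up to permutation and sign score 1.0, while unrelated dictionaries score near the expected overlap of random directions, so a value like 0.80 indicates most features are recovered consistently across runs.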
Problem

Research questions and friction points this paper is trying to address.

SAEs lack consistent features across training runs
Feature inconsistency undermines the reliability of mechanistic interpretability
Metrics are needed to measure SAE feature consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prioritize feature consistency in SAEs
Use PW-MCC metric for consistency measurement
Achieve high consistency with architectural choices