From Isolation to Entanglement: When Do Interpretability Methods Identify and Disentangle Known Concepts?

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether sparse autoencoders (SAEs) and sparse linear probes can reliably disentangle and localize causally relevant semantic concepts, such as sentiment, domain, or tense, when those concepts exhibit controlled inter-concept correlations. Method: the authors introduce an evaluation framework that explicitly manipulates multi-concept correlations, integrating subspace projection analysis, feature-steering interventions, and quantitative disentanglement metrics. Contribution/Results: they find that (1) the mapping from concepts to features is one-to-many: each feature corresponds to at most one concept, but each concept is distributed across many features; (2) steered SAE features generally affect multiple concepts, so individual features are neither selective nor independent, even though their causal effects fall in disjoint subspaces; and (3) neither correlational disentanglement metrics nor disjoint-subspace effects alone guarantee independent steering, so reliable interpretability assessment requires compositional, intervention-driven evaluation rather than isolated metrics. The framework provides an empirical benchmark for rigorously validating the reliability of interpretability methods in language models.
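The controlled-correlation setup can be pictured with a small sketch. The following is a minimal illustration, not the paper's actual data pipeline: the concept names (sentiment, tense), the correlation parameter `rho`, and the copy-based sampling scheme are assumptions made for exposition.

```python
import numpy as np

def sample_concept_labels(n, rho, seed=0):
    """Sample binary labels for two concepts (e.g. sentiment and tense)
    whose co-occurrence is controlled by rho in [0, 1]:
    rho = 0 gives independent concepts, rho = 1 makes them identical."""
    rng = np.random.default_rng(seed)
    sentiment = rng.integers(0, 2, size=n)
    # With probability rho, the second concept copies the first;
    # otherwise it is sampled independently.
    copy_mask = rng.random(n) < rho
    tense = np.where(copy_mask, sentiment, rng.integers(0, 2, size=n))
    return sentiment, tense

# Sweep correlation strengths, mirroring evaluation under
# increasing inter-concept correlation.
for rho in (0.0, 0.5, 0.9):
    s, t = sample_concept_labels(10_000, rho)
    print(f"rho={rho:.1f}  empirical corr={np.corrcoef(s, t)[0, 1]:.2f}")
```

Text for each condition would then be generated to realize these label pairs, so that featurizers can be trained and evaluated at each correlation strength.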

📝 Abstract
A central goal of interpretability is to recover representations of causally relevant concepts from the activations of neural networks. The quality of these concept representations is typically evaluated in isolation, and under implicit independence assumptions that may not hold in practice. Thus, it is unclear whether common featurization methods - including sparse autoencoders (SAEs) and sparse probes - recover disentangled representations of these concepts. This study proposes a multi-concept evaluation setting where we control the correlations between textual concepts, such as sentiment, domain, and tense, and analyze performance under increasing correlations between them. We first evaluate the extent to which featurizers can learn disentangled representations of each concept under increasing correlational strengths. We observe a one-to-many relationship from concepts to features: features correspond to no more than one concept, but concepts are distributed across many features. Then, we perform steering experiments, measuring whether each concept is independently manipulable. Even when trained on uniform distributions of concepts, SAE features generally affect many concepts when steered, indicating that they are neither selective nor independent; nonetheless, features affect disjoint subspaces. These results suggest that correlational metrics for measuring disentanglement are generally not sufficient for establishing independence when steering, and that affecting disjoint subspaces is not sufficient for concept selectivity. These results underscore the importance of compositional evaluations in interpretability research.
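The steering experiment described in the abstract can be sketched as follows. This is a hedged illustration under assumed interfaces, not the paper's implementation: `acts` holds cached residual-stream activations, `sae_decoder` is the SAE decoder weight matrix, and `concept_probes` maps each concept name to a pre-trained linear probe.

```python
import torch

def steering_effects(acts, sae_decoder, concept_probes, feature_idx, alpha=5.0):
    """Steer cached activations along one SAE feature's decoder direction
    and report how much each concept probe's prediction shifts.
    A concept-selective, independent feature would move only one entry.

    acts:           (n_examples, d_model) residual-stream activations
    sae_decoder:    (n_features, d_model) SAE decoder weights
    concept_probes: dict of concept name -> torch.nn.Linear(d_model, 1)
    """
    direction = sae_decoder[feature_idx]
    direction = direction / direction.norm()
    steered = acts + alpha * direction  # broadcasts over examples

    effects = {}
    with torch.no_grad():
        for name, probe in concept_probes.items():
            base = torch.sigmoid(probe(acts)).mean()
            new = torch.sigmoid(probe(steered)).mean()
            effects[name] = (new - base).item()  # shift in mean predicted label
    return effects
```

The paper's finding that steered features affect many concepts corresponds, in this sketch, to multiple entries of the returned dictionary moving at once rather than a single one.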
Problem

Research questions and friction points this paper is trying to address.

Evaluates concept disentanglement in neural network interpretability methods
Assesses feature independence under controlled concept correlations
Examines concept selectivity and manipulation in steering experiments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-concept evaluation setting controls textual concept correlations
Analyzes featurizer performance under increasing concept correlation strengths (a correlational selectivity check is sketched after this list)
Conducts steering experiments to measure independent concept manipulability
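Below is a minimal sketch of the kind of correlational selectivity check the paper argues is insufficient on its own: it links each feature to whichever concepts it correlates with above a threshold. The threshold value and the activation/label formats are illustrative assumptions, not the paper's metric.

```python
import numpy as np

def feature_concept_assignments(feature_acts, concept_labels, threshold=0.3):
    """Link each feature to the concepts it correlates with.

    feature_acts:   (n_examples, n_features) SAE feature activations
    concept_labels: dict of concept name -> (n_examples,) binary labels
    Returns, per feature, the concepts whose absolute correlation exceeds
    the threshold. A one-to-many concept-to-feature mapping appears here as
    features tied to at most one concept while each concept is spread over
    many features.
    """
    n_features = feature_acts.shape[1]
    assignments = [[] for _ in range(n_features)]
    for concept, labels in concept_labels.items():
        for j in range(n_features):
            col = feature_acts[:, j]
            if col.std() == 0:  # skip dead features
                continue
            r = np.corrcoef(col, labels)[0, 1]
            if abs(r) > threshold:
                assignments[j].append(concept)
    return assignments
```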