Causality is Key for Interpretability Claims to Generalise

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current research on the interpretability of large language models often struggles to generalise because it lacks causal evidence. This work proposes a diagnostic framework that integrates Pearl's causal hierarchy with causal representation learning to clarify which variables can be recovered from model activations, and under what assumptions. By matching interpretability claims to the evidence they require (interventions such as ablation and activation patching, and counterfactual analysis), the framework guides the selection of interpretability methods and the design of evaluations, strengthening the generalisability of interpretability findings and the credibility of their causal claims.

📝 Abstract
Interpretability research on large language models (LLMs) has yielded important insights into model behaviour, yet recurring pitfalls persist: findings that do not generalise, and causal interpretations that outrun the evidence. Our position is that causal inference specifies what constitutes a valid mapping from model activations to invariant high-level structures, the data or assumptions needed to achieve it, and the inferences it can support. Specifically, Pearl's causal hierarchy clarifies what an interpretability study can justify. Observations establish associations between model behaviour and internal components. Interventions (e.g., ablations or activation patching) support claims about how these edits affect a behavioural metric (e.g., average change in token probabilities) over a set of prompts. However, counterfactual claims -- i.e., asking what the model output would have been for the same prompt under an unobserved intervention -- remain largely unverifiable without controlled supervision. We show how causal representation learning (CRL) operationalises this hierarchy, specifying which variables are recoverable from activations and under what assumptions. Together, these motivate a diagnostic framework that helps practitioners select methods and evaluations that match claims to evidence, so that findings generalise.
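The abstract's distinction between interventional claims (rung 2) and counterfactual claims (rung 3) can be made concrete with a toy sketch of activation patching. This is not the paper's code: the two-layer "model", its weights, and the prompts are hypothetical stand-ins for a real LLM, chosen only to show the shape of the intervention do(h := h_clean) and the behavioural metric it licenses.

```python
import numpy as np

# Toy stand-in for an LLM layer stack (hypothetical, for illustration only):
# a hidden activation h that we can cache on a "clean" input and patch into
# a run on a "corrupted" input, as in activation patching.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # assumed first-layer weights
W2 = rng.normal(size=(3, 2))  # assumed readout weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, patch=None):
    """Run the toy model; if `patch` is given, overwrite the hidden activation."""
    h = np.tanh(x @ W1)  # hidden activation: the target of the intervention
    if patch is not None:
        h = patch        # activation patching: do(h := cached clean activation)
    return softmax(h @ W2), h

clean_x = np.array([1.0, 0.0, -1.0, 0.5])    # hypothetical "clean" prompt
corrupt_x = np.array([-1.0, 0.5, 1.0, -0.5])  # hypothetical "corrupted" prompt

clean_probs, clean_h = forward(clean_x)
corrupt_probs, _ = forward(corrupt_x)
patched_probs, _ = forward(corrupt_x, patch=clean_h)

# Rung-2 (interventional) claim: how much does restoring the clean activation
# move the output distribution back toward the clean run, on these prompts?
effect = patched_probs[0] - corrupt_probs[0]
```

Because this toy output depends only on h, patching the full activation recovers the clean distribution exactly; in a real LLM one patches a single component and averages the effect over a prompt set. The counterfactual question (what this output would have been for the same prompt under an intervention never actually run) is precisely what such an experiment alone cannot answer.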
Problem

Research questions and friction points this paper is trying to address.

interpretability
causality
generalisation
large language models
causal inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal inference
interpretability
causal representation learning
large language models
generalization