Addressing divergent representations from causal interventions on neural networks

📅 2025-11-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work shows that causal interventions on neural networks often induce distributional shifts in internal representations, degrading explanation fidelity. To address this, it distinguishes "harmless" from "harmful" representation shifts and designs a causal intervention framework grounded in the Counterfactual Latent (CL) loss, augmented with a distribution-aware regularization mechanism that explicitly suppresses activation of harmful causal pathways. Theoretical analysis and experiments demonstrate that common interventions deviate significantly from the model's natural representation distribution; the proposed method reduces harmful shifts by up to 37% on benchmarks including ImageNet, while preserving, or even improving, attribution accuracy and consistency. The core contributions are threefold: (i) the first systematic characterization of intervention-induced representation-shift types; (ii) verifiable fidelity constraints; and (iii) an interpretability-enhancement paradigm that jointly ensures causal plausibility and distributional consistency.


📝 Abstract
A common approach to mechanistic interpretability is to causally manipulate model representations via targeted interventions in order to understand what those representations encode. Here we ask whether such interventions create out-of-distribution (divergent) representations, and whether this raises concerns about how faithful their resulting explanations are to the target model in its natural state. First, we demonstrate empirically that common causal intervention techniques often do shift internal representations away from the natural distribution of the target model. Then, we provide a theoretical analysis of two classes of such divergences: "harmless" divergences that occur in the null-space of the weights and from covariance within behavioral decision boundaries, and "pernicious" divergences that activate hidden network pathways and cause dormant behavioral changes. Finally, in an effort to mitigate the pernicious cases, we modify the Counterfactual Latent (CL) loss from Grant (2025) that regularizes interventions to remain closer to the natural distributions, reducing the likelihood of harmful divergences while preserving the interpretive power of interventions. Together, these results highlight a path towards more reliable interpretability methods.
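The abstract's empirical question, whether an intervention pushes an activation off the model's natural distribution, can be sketched in a few lines. Everything below is hypothetical (the dimensions, the Gaussian "natural" activations, and the crude one-coordinate patch are illustrative stand-ins, not the paper's setup); the point is only that divergence is measurable, e.g. as Mahalanobis distance to empirically estimated activation statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "natural" activations: 1000 samples of a 16-d hidden
# representation collected from the model on in-distribution inputs.
natural = rng.normal(size=(1000, 16))
mu = natural.mean(axis=0)
cov = np.cov(natural, rowvar=False) + 1e-6 * np.eye(16)
cov_inv = np.linalg.inv(cov)

def mahalanobis(h):
    """Distance of an activation vector from the natural distribution."""
    d = h - mu
    return float(np.sqrt(d @ cov_inv @ d))

# A crude causal intervention: overwrite one coordinate of a natural
# activation with an extreme value (a stand-in for activation patching).
h_natural = natural[0]
h_patched = h_natural.copy()
h_patched[3] = 10.0  # pushes the representation far off the data manifold

print(mahalanobis(h_natural), mahalanobis(h_patched))
```

The patched vector scores far higher than any natural sample, which is the kind of distributional shift the paper measures for real intervention techniques.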
Problem

Research questions and friction points this paper is trying to address.

Investigating whether causal interventions create out-of-distribution representations in neural networks
Analyzing harmless versus pernicious divergences caused by intervention techniques
Developing regularization methods to reduce harmful divergences while preserving interpretability
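The harmless/pernicious distinction above has a simple linear-algebra core for the null-space case: a perturbation lying in the null space of the next layer's weight matrix cannot change downstream pre-activations, while an equally large perturbation outside it can. A minimal sketch (hypothetical dimensions, random weights; not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical next-layer weight matrix mapping a 6-d representation to
# 4 units, so its null space is (generically) 2-dimensional.
W = rng.normal(size=(4, 6))

# Orthonormal basis for the null space of W via SVD.
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[4:]            # rows spanning the null space (rank 4)

h = rng.normal(size=6)                 # a natural representation
delta_harmless = 5.0 * null_basis[0]   # large shift, invisible downstream
delta_pernicious = 5.0 * Vt[0]         # same magnitude, row-space direction

print(np.linalg.norm(W @ (h + delta_harmless) - W @ h))    # ~0
print(np.linalg.norm(W @ (h + delta_pernicious) - W @ h))  # large
```

Both interventions move the representation equally far in activation space, yet only the second can activate downstream pathways, which is why divergence magnitude alone does not determine harm.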
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regularizing interventions to reduce distributional divergence
Modifying loss function to preserve natural representation distributions
Mitigating pernicious divergences while maintaining interpretive power
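The regularization idea in the bullets above can be sketched as an intervention objective with a distribution-aware penalty. This is a generic stand-in, not the actual CL loss of Grant (2025) (which the summary does not reproduce); the statistics, the z-score penalty, and the weight `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical natural activation statistics estimated from clean runs.
natural = rng.normal(size=(500, 8))
mu, sigma = natural.mean(axis=0), natural.std(axis=0)

def intervention_objective(h_new, h_target, lam=0.5):
    """Steer a representation toward an intervention target while penalizing
    divergence from the natural distribution (CL-style regularization
    stand-in, not the paper's actual loss)."""
    steer = np.sum((h_new - h_target) ** 2)
    divergence = np.sum(((h_new - mu) / sigma) ** 2)  # z-score penalty
    return steer + lam * divergence

# Gradient descent on the intervened representation itself.
lam, lr = 0.5, 0.05
h = natural[0].copy()
target = h + 4.0  # an aggressive, off-distribution intervention target
for _ in range(200):
    grad = 2 * (h - target) + lam * 2 * (h - mu) / sigma**2
    h -= lr * grad

# h now sits between the raw target and the natural mean: the intervention
# still moves the representation, but stays closer to the distribution.
```

Tuning `lam` trades off how faithfully the intervention hits its target against how far the representation is allowed to drift, mirroring the paper's goal of reducing pernicious divergence while preserving interpretive power.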