Combining Causal Models for More Accurate Abstractions of Neural Networks

📅 2025-03-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing high-level algorithmic abstractions often capture only part of a neural network's actual computation. Method: Grounded in causal abstraction theory, the paper combines several simple high-level causal models into a single abstraction of GPT-2 small, learning an input-driven switch that models the network as occupying different computational states depending on the input. The combination is evaluated along two axes, interchange intervention accuracy (faithfulness) and input coverage (hypothesis strength), and the method lets the user modulate between them. Results: On GPT-2 small fine-tuned on two toy tasks, the learned combinations describe the network's behavior more accurately than single high-level models at a given faithfulness level, improving the coverage-faithfulness trade-off for mechanistic interpretability.
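As a rough, hypothetical illustration of the input-driven switching described above, the Python sketch below shows one way a combination of simple causal models might route each input to the model that claims to explain it, and report the resulting input coverage. All names here (SimpleCausalModel, CombinedAbstraction, applies_to) are illustrative assumptions, not the paper's implementation.

```python
from typing import Any, Callable, Dict, List, Optional


class SimpleCausalModel:
    """A simple high-level causal model that explains only a subset of inputs."""

    def __init__(self, name: str,
                 applies_to: Callable[[Any], bool],
                 run: Callable[[Any], Dict[str, Any]]):
        self.name = name
        self.applies_to = applies_to  # which inputs this model claims to explain
        self.run = run                # the high-level computation on those inputs


class CombinedAbstraction:
    """Combination of simple models; each input is explained by at most one of them."""

    def __init__(self, models: List[SimpleCausalModel]):
        self.models = models

    def explain(self, x: Any) -> Optional[Dict[str, Any]]:
        # Input-driven "state switch": the first applicable model explains this input.
        for m in self.models:
            if m.applies_to(x):
                return m.run(x)
        return None  # input not covered by the current combination

    def coverage(self, inputs: List[Any]) -> float:
        # Hypothesis strength: fraction of inputs covered by some model in the combination.
        covered = sum(any(m.applies_to(x) for m in self.models) for x in inputs)
        return covered / len(inputs) if inputs else 0.0
```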

📝 Abstract
Mechanistic interpretability aims to reverse engineer neural networks by uncovering which high-level algorithms they implement. Causal abstraction provides a precise notion of when a network implements an algorithm, i.e., a causal model of the network contains low-level features that realize the high-level variables in a causal model of the algorithm. A typical problem in practical settings is that the algorithm is not an entirely faithful abstraction of the network, meaning it only partially captures the true reasoning process of a model. We propose a solution where we combine different simple high-level models to produce a more faithful representation of the network. Through learning this combination, we can model neural networks as being in different computational states depending on the input provided, which we show is more accurate to GPT-2 small fine-tuned on two toy tasks. We observe a trade-off between the strength of an interpretability hypothesis, which we define in terms of the number of inputs explained by the high-level models, and its faithfulness, which we define as the interchange intervention accuracy. Our method allows us to modulate between the two, providing the most accurate combination of models that describe the behavior of a neural network given a faithfulness level.
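The abstract defines faithfulness as interchange intervention accuracy (IIA). The sketch below shows one plausible way to compute it: patch an aligned low-level activation from a source run into a base run and check whether the network's output matches what the high-level model predicts under the corresponding high-level intervention. The callables passed in (get_low_level_value, run_with_patch, high_level_prediction) are assumed placeholders, not an actual API from the paper or any library.

```python
from typing import Any, Callable, List, Tuple


def interchange_intervention_accuracy(
    pairs: List[Tuple[Any, Any]],                      # (base, source) input pairs
    get_low_level_value: Callable[[Any], Any],         # read the aligned activation on the source run
    run_with_patch: Callable[[Any, Any], Any],         # run the network on base with that activation patched in
    high_level_prediction: Callable[[Any, Any], Any],  # high-level model's output under the same swap
) -> float:
    """Fraction of (base, source) pairs where the patched network output agrees
    with the high-level causal model's predicted output."""
    if not pairs:
        return 0.0
    hits = 0
    for base, source in pairs:
        patched_value = get_low_level_value(source)
        network_output = run_with_patch(base, patched_value)
        expected_output = high_level_prediction(base, source)
        hits += int(network_output == expected_output)
    return hits / len(pairs)
```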
Problem

Research questions and friction points this paper is trying to address.

Building more accurate abstractions of a neural network by combining several simple causal models.
Handling high-level algorithms that are only partially faithful abstractions of the network.
Modulating the trade-off between interpretability-hypothesis strength (input coverage) and faithfulness (see the sketch after this list).
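A hedged sketch of the third point: given per-model coverage and IIA estimates, one could pick the combination with the greatest coverage whose combined accuracy stays above a chosen faithfulness level. The greedy strategy and data layout below are assumptions for illustration, not the paper's optimization procedure.

```python
from typing import List, Tuple


def select_models(
    candidates: List[Tuple[str, float, float]],  # (name, coverage, iia) per simple model
    min_faithfulness: float,                     # required coverage-weighted IIA of the combination
) -> List[str]:
    chosen: List[str] = []
    total_cov, combined_iia = 0.0, 0.0
    # Greedily add the highest-coverage models while the combined IIA stays acceptable.
    for name, cov, iia in sorted(candidates, key=lambda c: -c[1]):
        new_cov = total_cov + cov
        if new_cov == 0:
            continue
        new_iia = (combined_iia * total_cov + iia * cov) / new_cov
        if new_iia >= min_faithfulness:
            chosen.append(name)
            total_cov, combined_iia = new_cov, new_iia
    return chosen


# A stricter faithfulness level keeps only the most accurate models,
# while a looser one trades accuracy for broader input coverage.
models = [("M1", 0.5, 0.95), ("M2", 0.3, 0.80), ("M3", 0.2, 0.60)]
print(select_models(models, min_faithfulness=0.90))  # -> ['M1']
print(select_models(models, min_faithfulness=0.85))  # -> ['M1', 'M2']
```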
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines simple high-level causal models into a more faithful abstraction
Modulates the trade-off between hypothesis strength and faithfulness
Models the network as switching between computational states depending on the input