GIM: Improved Interpretability for Large Language Models

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Softmax normalization in large language model (LLM) attention mechanisms induces a form of "self-repair": when the signal of a critical component is attenuated, the softmax redistributes attention weight to the remaining scores, compensating for the ablation and causing conventional ablation- and gradient-based attribution methods to systematically underestimate component importance. While prior work attributes self-repair to layer normalization and back-up components, this paper identifies a novel form arising within the attention mechanism itself. The authors propose Gradient Interaction Modifications (GIM), a backward-pass technique that accounts for this self-repair during backpropagation to correct attribution bias. GIM requires no forward-pass architectural modification and is compatible with mainstream LLMs, including Gemma, LLaMA, and Qwen, across diverse tasks. Empirical evaluation shows that GIM consistently improves faithfulness over state-of-the-art circuit identification and feature attribution methods, with a reported average improvement of 23.6%. The implementation is publicly available.

📝 Abstract
Ensuring faithful interpretability in large language models is imperative for trustworthy and reliable AI. A key obstacle is self-repair, a phenomenon where networks compensate for reduced signal in one component by amplifying others, masking the true importance of the ablated component. While prior work attributes self-repair to layer normalization and back-up components that compensate for ablated components, we identify a novel form occurring within the attention mechanism, where softmax redistribution conceals the influence of important attention scores. This leads traditional ablation and gradient-based methods to underestimate the significance of all components contributing to these attention scores. We introduce Gradient Interaction Modifications (GIM), a technique that accounts for self-repair during backpropagation. Extensive experiments across multiple large language models (Gemma 2B/9B, LLAMA 1B/3B/8B, Qwen 1.5B/3B) and diverse tasks demonstrate that GIM significantly improves faithfulness over existing circuit identification and feature attribution methods. Our work is a significant step toward better understanding the inner mechanisms of LLMs, which is crucial for improving them and ensuring their safety. Our code is available at https://github.com/JoakimEdin/gim.
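The softmax self-repair effect described in the abstract can be illustrated with a minimal numerical sketch (toy logits chosen for illustration, not taken from the paper): ablating a dominant attention logit causes the softmax to renormalize the remaining scores, so the attention output degrades less than the ablated score's true importance would suggest, which is why ablation-based attribution underestimates it.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical attention logits: the first score is the "important" one.
logits = [4.0, 1.0, 1.0, 1.0]
weights = softmax(logits)          # first weight dominates (~0.87)

# Ablate the important score by driving its logit to -inf (effectively).
ablated = softmax([-1e9] + logits[1:])

# The remaining weights renormalize to sum to 1 (~1/3 each here),
# partially "repairing" the attention output and masking how much
# the ablated score actually mattered.
print(weights)
print(ablated)
```

Because both distributions still sum to one, downstream value mixing remains well-scaled after the ablation; GIM's contribution, per the abstract, is to account for this compensation during backpropagation rather than in the forward pass.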
Problem

Research questions and friction points this paper is trying to address.

Addresses self-repair in LLM attention mechanisms, which masks the true importance of components
Accounts for self-repair during backpropagation to improve interpretability
Improves the faithfulness of circuit identification and feature attribution methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies a novel form of self-repair within the attention mechanism
Introduces Gradient Interaction Modifications (GIM)
Improves faithfulness over existing circuit identification and feature attribution methods