Grokking ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing model interpretation methods typically analyze parameters, data, or training trajectories in isolation, missing the interactions between them. To address this, the authors propose ExPLAIND, a unified attribution framework that integrates parameter-, data-, and training-trajectory-level attribution within a single theoretical framework. By generalizing gradient path kernels to more realistic training settings, ExPLAIND reformulates gradient-descent training as a kernel machine and derives parameter- and step-wise influence scores from the kernel feature maps. Applied to a Transformer exhibiting grokking, it reveals an alignment of input embeddings and final layers around a representation pipeline learned after the memorization phase, refining previously proposed stages of grokking. Empirically, the kernel reformulation accurately replicates both a CNN and a Transformer, and the derived influence scores match existing methods on parameter pruning.

📝 Abstract
Post-hoc interpretability methods typically attribute a model's behavior to its components, data, or training trajectory in isolation. This leads to explanations that lack a unified view and may miss key interactions. While combining existing methods or applying them at different training stages offers broader insights, these approaches usually lack theoretical support. In this work, we present ExPLAIND, a unified framework that integrates all three perspectives. First, we generalize recent work on gradient path kernels, which reformulates models trained by gradient descent as kernel machines, to more realistic training settings. Empirically, we find that both a CNN and a Transformer model are replicated accurately by this reformulation. Second, we derive novel parameter- and step-wise influence scores from the kernel feature maps. We show their effectiveness in parameter pruning, where they perform comparably to existing methods, reinforcing their value for model component attribution. Finally, jointly interpreting model components and data over the training process, we leverage ExPLAIND to analyze a Transformer that exhibits Grokking. Among other things, our findings support previously proposed stages of Grokking, while refining the final phase as one of alignment of input embeddings and final layers around a representation pipeline learned after the memorization phase. Overall, ExPLAIND provides a theoretically grounded, unified framework to interpret model behavior and training dynamics.
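To make the kernel-machine view concrete, here is a minimal toy sketch (our illustration, not the paper's implementation) of a gradient path kernel for a one-parameter linear model: the trained model's prediction is recovered exactly as a sum of step-wise kernel similarities to the training points, accumulated along the optimization path.

```python
import numpy as np

# Toy gradient path kernel: for f(x; w) = w * x trained by full-batch gradient
# descent on squared error, the final prediction equals a sum over steps of
# kernel similarities K_t(x, x_i) = (df/dw at x) * (df/dw at x_i), weighted by
# the per-step loss gradients -- gradient descent rewritten as a kernel machine.

def train_with_path(xs, ys, lr=0.1, steps=50):
    """Run gradient descent, recording what the kernel view needs per step."""
    w = 0.0
    feature_path, residual_path = [], []
    for _ in range(steps):
        res = w * xs - ys                 # dL_i/df for L_i = (f - y_i)^2 / 2
        feature_path.append(xs.copy())    # df/dw = x (constant for this model)
        residual_path.append(res.copy())
        w -= lr * np.mean(res * xs)       # full-batch gradient step
    return w, feature_path, residual_path

def path_kernel_prediction(x_test, feature_path, residual_path, lr=0.1):
    """f(x_test) = w0*x_test - sum_t lr/n * sum_i res_{t,i} * K_t(x_test, x_i)."""
    pred = 0.0                            # w0 = 0, so the initial term vanishes
    for feat, res in zip(feature_path, residual_path):
        pred -= lr / len(feat) * np.sum(res * (x_test * feat))
    return pred

xs, ys = np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])
w, feats, resids = train_with_path(xs, ys)
direct = w * 1.5                          # ordinary forward pass
kernel_pred = path_kernel_prediction(1.5, feats, resids)
assert abs(direct - kernel_pred) < 1e-9   # identical up to float error
```

Here the identity is exact because df/dw does not depend on w; for nonlinear networks the reformulation tracks the model gradients along the actual training trajectory, which is what the paper generalizes to more realistic training settings.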
Problem

Research questions and friction points this paper is trying to address.

How can model, data, and training attribution be unified in a single theoretically grounded framework?
How can parameter- and step-wise influence scores be derived from gradient path kernels?
What happens during the stages of grokking in Transformers, and how should the final phase be characterized?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework integrating model, data, training
Generalized gradient path kernels for realistic training
Novel parameter- and step-wise influence scores
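The step-wise influence scores admit a similar toy illustration. The sketch below uses a TracIn-style first-order score (our illustrative stand-in, not ExPLAIND's exact kernel-derived formula): each training step is credited with the first-order change it induces in a test point's loss.

```python
import numpy as np

# Toy step-wise influence (TracIn-style first-order scores; an illustrative
# stand-in, not the paper's exact kernel-derived formulation). A gradient step
# changes a test loss by approximately
#   -lr * <grad_w L_train(w_t), grad_w L_test(w_t)>,
# so that inner product attributes test behavior to individual training steps.

def stepwise_influence(xs, ys, x_test, y_test, lr=0.01, steps=500):
    w = 0.0
    scores = []
    for _ in range(steps):
        g_train = np.mean((w * xs - ys) * xs)    # grad of mean squared-error/2
        g_test = (w * x_test - y_test) * x_test  # grad of the test-point loss
        scores.append(lr * g_train * g_test)     # first-order test-loss drop
        w -= lr * g_train
    return np.array(scores)

xs, ys = np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])
scores = stepwise_influence(xs, ys, x_test=1.5, y_test=3.0)
# Early steps dominate here, and the scores sum to roughly the total test-loss
# reduction over training (initial test loss is (0 * 1.5 - 3)^2 / 2 = 4.5).
```

Aggregating such scores per step, per example, or per parameter group is what enables the joint component/data/trajectory reading of training dynamics described above.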