Training Language Models to Explain Their Own Computations

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether language models (LMs) can leverage "privileged access" to their internal computations to generate accurate, generalizable natural language explanations. Method: the authors introduce *self-explanation*, a paradigm in which explanatory annotations are generated automatically via existing interpretability techniques (e.g., feature attribution, causal mediation analysis), and a pretrained LM is fine-tuned on only tens of thousands of such examples to produce explanations of feature encoding, activation-level causal structure, and input influence. Contribution/Results: experiments show that self-explaining LMs outperform strong external explainer models and generalize to unseen queries with minimal training. This provides systematic empirical evidence that privileged access to internal states yields explanatory value, enabling scalable, low-cost model interpretation without architectural modification or expensive human annotation.

📝 Abstract
Can language models (LMs) learn to faithfully describe their internal computations? Are they better able to describe themselves than other models? We study the extent to which LMs' privileged access to their own internals can be leveraged to produce new techniques for explaining their behavior. Using existing interpretability techniques as a source of ground truth, we fine-tune LMs to generate natural language descriptions of (1) the information encoded by LM features, (2) the causal structure of LMs' internal activations, and (3) the influence of specific input tokens on LM outputs. When trained with only tens of thousands of example explanations, explainer models exhibit non-trivial generalization to new queries. This generalization appears partly attributable to explainer models' privileged access to their own internals: using a model to explain its own computations generally works better than using a *different* model to explain its computations (even if the other model is significantly more capable). Our results suggest not only that LMs can learn to reliably explain their internal computations, but that such explanations offer a scalable complement to existing interpretability methods.
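The pipeline described above, where an interpretability method supplies ground-truth labels that become fine-tuning pairs, can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the leave-one-out influence scorer, the toy scoring function, and all names (`token_influence`, `make_example`) are hypothetical stand-ins for the paper's attribution techniques.

```python
# Hypothetical sketch of building self-explanation fine-tuning data:
# an interpretability method (here, toy leave-one-out token influence)
# produces a ground-truth label, which is paired with a natural
# language query/explanation for fine-tuning.

def token_influence(tokens, score_fn):
    """Leave-one-out influence: how much the score drops when each token is removed."""
    base = score_fn(tokens)
    return [base - score_fn(tokens[:i] + tokens[i + 1:]) for i in range(len(tokens))]

def make_example(tokens, score_fn):
    """Pair a query about input influence with a ground-truth explanation."""
    influence = token_influence(tokens, score_fn)
    top = tokens[max(range(len(tokens)), key=lambda i: influence[i])]
    return {
        "prompt": f"Which input token most influences the output for: {' '.join(tokens)}?",
        "completion": f"The token '{top}' has the largest influence on the output.",
    }

# Toy scorer standing in for a real model output: the "score" is the
# fraction of tokens drawn from a positive-sentiment word list.
POSITIVE = {"great", "good", "excellent"}
score = lambda toks: sum(t in POSITIVE for t in toks) / max(len(toks), 1)

example = make_example(["the", "movie", "was", "great"], score)
# example["completion"] names 'great' as the most influential token
```

In the paper's actual setup, thousands of such (query, explanation) pairs, derived from established interpretability techniques rather than a toy scorer, are used to fine-tune the same model whose internals are being explained.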
Problem

Research questions and friction points this paper is trying to address.

Training language models to generate self-explanations of their internal computations
Comparing self-explanation effectiveness versus external model explanations
Developing scalable interpretability methods using models' privileged internal access
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning LMs to describe their internal features
Generating natural language explanations of activations' causal structure
Using self-explanations as scalable interpretability complement