Learning Self-Interpretation from Interpretability Artifacts: Training Lightweight Adapters on Vector-Label Pairs

📅 2026-02-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the instability and hyperparameter sensitivity of existing self-explanation methods, which struggle to reliably uncover the internal reasoning mechanisms of large language models. The authors propose a paradigm that freezes the original model and trains only a lightweight scalar affine adapter, requiring merely \(d_{\text{model}} + 1\) parameters, on interpretability outputs. This approach achieves robust self-explanation across tasks and model families without chain-of-thought prompting or modifications to the main model. Key findings: the bias vector accounts for 85% of the performance gain, simpler adapters generalize more strongly, and, for the first time, self-explanatory capability is shown to scale positively with model size. Experiments on a 70B-parameter model yield a generation score of 71% (surpassing the 63% achieved by the training labels themselves), 94% recall in topic identification (versus 1% for the baseline), and successful decoding of implicit bridging entities, revealing latent multi-hop reasoning paths.

📝 Abstract
Self-interpretation methods prompt language models to describe their own internal states, but remain unreliable due to hyperparameter sensitivity. We show that training lightweight adapters on interpretability artifacts, while keeping the LM entirely frozen, yields reliable self-interpretation across tasks and model families. A scalar affine adapter with just $d_\text{model}+1$ parameters suffices: trained adapters generate sparse autoencoder feature labels that outperform the training labels themselves (71% vs 63% generation scoring at 70B scale), identify topics with 94% recall@1 versus 1% for untrained baselines, and decode bridge entities in multi-hop reasoning that appear in neither prompt nor response, surfacing implicit reasoning without chain-of-thought. The learned bias vector alone accounts for 85% of improvement, and simpler adapters generalize better than more expressive alternatives. Controlling for model knowledge via prompted descriptions, we find self-interpretation gains outpace capability gains from 7B to 72B parameters. Our results demonstrate that self-interpretation improves with scale, without modifying the model being interpreted.
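The abstract does not specify the adapter beyond its parameter count, but a scalar affine map is the natural reading: one trainable scalar scale plus a $d_\text{model}$-dimensional bias, applied to hidden states of a frozen model. A minimal sketch under that assumption (class and variable names are illustrative, not from the paper):

```python
import numpy as np

D_MODEL = 8  # illustrative; a 70B-class model has d_model in the thousands


class ScalarAffineAdapter:
    """Maps a frozen LM's hidden state h to s * h + b.

    Trainable parameters: one scalar `s` plus a d_model-dim bias `b`,
    i.e. d_model + 1 in total. The underlying model is never updated.
    """

    def __init__(self, d_model: int):
        self.s = 1.0                 # scalar scale, trained
        self.b = np.zeros(d_model)   # bias vector, trained

    def __call__(self, h: np.ndarray) -> np.ndarray:
        return self.s * h + self.b

    def num_params(self) -> int:
        return 1 + self.b.size       # d_model + 1


adapter = ScalarAffineAdapter(D_MODEL)
h = np.ones(D_MODEL)                 # stand-in for a frozen model's hidden state
out = adapter(h)                     # identity at initialization
```

The paper's ablation (the bias alone yielding 85% of the gain) is consistent with this shape: most of the learned signal lives in `b`, while `s` contributes a single degree of freedom.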
Problem

Research questions and friction points this paper is trying to address.

self-interpretation
interpretability artifacts
language models
hyperparameter sensitivity
internal states
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-interpretation
lightweight adapters
interpretability artifacts
frozen language models
sparse autoencoder features
Keenan Pepper
AE Studio
Alex McKenzie
AE Studio
AI Safety · AI Alignment · Mechanistic Interpretability · AI Safety Evaluations
Florin Pop
AE Studio
Stijn Servaes
AE Studio
Martin Leitgab
Unknown affiliation
Mike Vaiana
AE Studio
Judd Rosenblatt
AE Studio
Michael S. A. Graziano
Princeton Neuroscience Institute & Department of Psychology, Princeton University, Princeton, NJ
Diogo de Lucena
AE Studio