🤖 AI Summary
This work addresses the instability and hyperparameter sensitivity of existing self-explanation methods, which struggle to reliably uncover the internal reasoning mechanisms of large language models. The authors propose a paradigm that freezes the original model and trains only a lightweight scalar affine adapter, requiring merely $d_\text{model} + 1$ parameters, on interpretability outputs. This approach achieves robust self-explanation across tasks and model families without chain-of-thought prompting or modifications to the main model. Key findings: the learned bias vector accounts for 85% of the performance gains; simpler adapters generalize better than more expressive ones; and, critically, this is the first demonstration that self-explanation capability scales positively with model size. Experiments on a 70B-parameter model show a 71% generation score (surpassing the 63% achieved by the training labels themselves), 94% recall@1 in topic identification (versus 1% for the untrained baseline), and successful decoding of implicit bridge entities, revealing latent multi-hop reasoning paths.
📝 Abstract
Self-interpretation methods prompt language models to describe their own internal states, but remain unreliable due to hyperparameter sensitivity. We show that training lightweight adapters on interpretability artifacts, while keeping the LM entirely frozen, yields reliable self-interpretation across tasks and model families. A scalar affine adapter with just $d_\text{model}+1$ parameters suffices: trained adapters generate sparse autoencoder feature labels that outperform the training labels themselves (71% vs 63% generation scoring at 70B scale), identify topics with 94% recall@1 versus 1% for untrained baselines, and decode bridge entities in multi-hop reasoning that appear in neither prompt nor response, surfacing implicit reasoning without chain-of-thought. The learned bias vector alone accounts for 85% of improvement, and simpler adapters generalize better than more expressive alternatives. Controlling for model knowledge via prompted descriptions, we find self-interpretation gains outpace capability gains from 7B to 72B parameters. Our results demonstrate that self-interpretation improves with scale, without modifying the model being interpreted.
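The parameter count claimed above can be made concrete with a minimal sketch (an illustrative assumption, not the authors' implementation): a scalar affine adapter maps frozen hidden states $h$ to $s \cdot h + b$, where $s$ is a single scalar and $b$ is a $d_\text{model}$-dimensional bias, giving $d_\text{model} + 1$ trainable parameters in total.

```python
import numpy as np

# Toy sketch of a scalar affine adapter: one scalar scale plus a
# per-dimension bias, applied to hidden states of a frozen model.

d_model = 8  # hypothetical hidden size; production models use e.g. 4096+


class ScalarAffineAdapter:
    def __init__(self, d_model):
        self.scale = 1.0               # single scalar parameter
        self.bias = np.zeros(d_model)  # d_model bias parameters

    def num_params(self):
        return 1 + self.bias.size      # = d_model + 1

    def __call__(self, h):
        # h: array of shape (..., d_model), hidden states from the frozen LM
        return self.scale * h + self.bias


adapter = ScalarAffineAdapter(d_model)
h = np.ones((2, d_model))              # stand-in for real hidden states
out = adapter(h)
print(adapter.num_params())            # prints 9, i.e. d_model + 1
```

Since the abstract reports that the learned bias vector alone accounts for 85% of the improvement, a bias-only variant with just $d_\text{model}$ parameters would presumably capture most of the effect.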