🤖 AI Summary
This work investigates how predictions evolve across the layers of a Transformer, viewed through the lens of iterative inference. To analyze frozen pre-trained models, the authors propose the "tuned lens": a lightweight, layer-specific affine probe trained to decode each hidden state into a distribution over the vocabulary, yielding reliable, unbiased, high-fidelity reconstructions of model outputs. Compared to the earlier logit lens, the method is substantially more accurate and robust across layers. Causal experiments indicate the tuned lens relies on features similar to those the model itself uses, and the trajectory of latent predictions it recovers can be used to detect malicious inputs with high accuracy. The approach is validated on autoregressive language models with up to 20B parameters, offering a practical, scalable tool for mechanistic interpretability research.
📝 Abstract
We analyze transformers from the perspective of iterative inference, seeking to understand how model predictions are refined layer by layer. To do so, we train an affine probe for each block in a frozen pretrained model, making it possible to decode every hidden state into a distribution over the vocabulary. Our method, the *tuned lens*, is a refinement of the earlier "logit lens" technique, which yielded useful insights but is often brittle. We test our method on various autoregressive language models with up to 20B parameters, showing it to be more predictive, reliable, and unbiased than the logit lens. With causal experiments, we show the tuned lens uses similar features to the model itself. We also find that the trajectory of latent predictions can be used to detect malicious inputs with high accuracy. All code needed to reproduce our results can be found at https://github.com/AlignmentResearch/tuned-lens.
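The core idea, an affine probe per layer whose output is passed through the model's frozen unembedding, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the `TunedLens` class, the near-identity initialization, and the `unembed` matrix interface are all assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class TunedLens:
    """Illustrative layer-specific affine probe: h -> A h + b, then unembed.

    With A = I and b = 0 this reduces to the logit lens, which applies
    the unembedding directly to the hidden state.
    """
    def __init__(self, d_model, unembed):
        self.A = np.eye(d_model)        # affine weight, initialized to identity
        self.b = np.zeros(d_model)      # affine bias
        self.unembed = unembed          # frozen (vocab_size, d_model) unembedding

    def decode(self, h):
        """Map a hidden state (d_model,) to a distribution over the vocabulary."""
        return softmax((self.A @ h + self.b) @ self.unembed.T)
```

In practice the probe parameters `A` and `b` would be trained (per layer) to match the model's final-layer distribution, while the model and unembedding stay frozen; only the probe is optimized.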