🤖 AI Summary
This work proposes a general method to automatically decompile concise, interpretable RASP programs from Transformer models that perform well on length-generalization tasks, thereby verifying whether these models truly implement generalizable algorithmic logic. By combining Transformer reparameterization, causal intervention analysis, and RASP program synthesis with simplification techniques, the approach identifies minimal subprograms sufficient to explain model behavior. Applied across multiple algorithmic and formal language tasks, the method successfully recovers simple RASP programs whose execution matches the models' predictions, offering the first direct and interpretable evidence of the computational mechanisms that Transformers implement internally.
📄 Abstract
Recent work has shown that the computations of Transformers can be simulated in the RASP family of programming languages. These findings have enabled improved understanding of the expressive capacity and generalization abilities of Transformers. In particular, Transformers have been suggested to length-generalize exactly on problems that have simple RASP programs. However, it remains open whether trained models actually implement simple interpretable programs. In this paper, we present a general method to extract such programs from trained Transformers. The idea is to faithfully re-parameterize a Transformer as a RASP program and then apply causal interventions to discover a small sufficient sub-program. In experiments on small Transformers trained on algorithmic and formal language tasks, we show that our method often recovers simple and interpretable RASP programs from length-generalizing Transformers. Our results provide the most direct evidence so far that Transformers internally implement simple RASP programs.