🤖 AI Summary
This work investigates how attention heads in small Transformers collaborate during counting tasks: do they operate via pseudo-ensemble redundancy or via a functionally differentiated division of labor? Using mechanistic interpretability analysis, attention pattern visualization, and head-level functional attribution, we find that heads exhibit high semantic redundancy, jointly executing identical subtasks (i.e., pseudo-ensemble behavior); yet syntactic correctness depends critically on a non-uniform weighted aggregation of head outputs that satisfies the grammatical constraints. This reveals a decoupling between semantic redundancy and syntactic sensitivity within the attention mechanism. Our results challenge assumptions about functional specialization in small Transformers and offer a new lens for analyzing their internal structure and interpretability.
📝 Abstract
We present an in-depth mechanistic interpretability analysis of small transformers trained on an elementary task, counting, which is a crucial deductive step in many algorithms. In particular, we investigate the collaboration and competition among the attention heads: we ask whether the heads behave as a pseudo-ensemble, all solving the same subtask, or whether they perform different subtasks, in which case they can solve the original task only in conjunction. We present evidence that, with respect to the semantics of the counting task, the attention heads behave as a pseudo-ensemble, but that their outputs must be aggregated in a non-uniform manner to produce an encoding that conforms to the syntax. Our source code will be available upon publication.
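To make the distinction concrete, here is a minimal numpy sketch (not the paper's code; all shapes and weights are illustrative assumptions) of how per-head outputs in multi-head attention are combined through the output projection, contrasting a uniform combination with the non-uniform weighting the abstract refers to:

```python
import numpy as np

# Hypothetical illustration: in multi-head attention, each head h produces an
# output z_h, and the residual-stream update is sum_h z_h @ W_O[h], where
# W_O[h] is head h's slice of the output projection. The projection therefore
# sets each head's effective contribution.
rng = np.random.default_rng(0)

n_heads, d_head, d_model = 4, 8, 32
head_outputs = rng.normal(size=(n_heads, d_head))   # one token's per-head outputs z_h
W_O = rng.normal(size=(n_heads, d_head, d_model))   # per-head output-projection slices

# Uniform aggregation: every head contributes at the same scale
# (what a pure pseudo-ensemble reading would suggest).
uniform = sum(head_outputs[h] @ W_O[h] for h in range(n_heads)) / n_heads

# Non-uniform aggregation: per-head scalar weights (in a trained model, such
# weighting is implicit in the relative norms of the W_O[h] slices).
alpha = np.array([0.7, 0.1, 0.1, 0.1])
weighted = sum(alpha[h] * (head_outputs[h] @ W_O[h]) for h in range(n_heads))

assert uniform.shape == weighted.shape == (d_model,)
```

Under the abstract's finding, the semantic content would be recoverable from either combination (the heads are redundant), but only the non-uniform one yields an encoding that downstream layers can decode into syntactically correct output.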