Extrapolation by Association: Length Generalization Transfer in Transformers

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Transformer language models exhibit poor length generalization—i.e., degraded performance on input sequences substantially longer than those seen during training—and this capability rarely transfers across tasks by default. Method: We propose a multi-task joint training framework that pairs a target task with longer, structurally related auxiliary algorithmic tasks—namely, long-sequence arithmetic, string transformation, and maze navigation—to explicitly scaffold length extrapolation. Contribution/Results: We demonstrate, for the first time, that length generalization transfers effectively between semantically or structurally similar tasks. We further show that pretrained language models encode reusable “computational skeletons” supporting downstream length extrapolation. On controlled algorithmic benchmarks, joint training yields significant improvements in length extrapolation across diverse tasks; attention head analysis shows that the degree of cross-task attention head reuse strongly correlates with the strength of generalization transfer.

📝 Abstract
Transformer language models have demonstrated impressive generalization capabilities in natural language domains, yet we lack a fine-grained understanding of how such generalization arises. In this paper, we investigate length generalization—the ability to extrapolate from shorter to longer inputs—through the lens of task association. We find that length generalization can be transferred across related tasks. That is, training a model with a longer and related auxiliary task can lead it to generalize to unseen and longer inputs from some other target task. We demonstrate this length generalization transfer across diverse algorithmic tasks, including arithmetic operations, string transformations, and maze navigation. Our results show that transformer models can inherit generalization capabilities from similar tasks when trained jointly. Moreover, we observe similar transfer effects in pretrained language models, suggesting that pretraining equips models with reusable computational scaffolding that facilitates extrapolation in downstream settings. Finally, we provide initial mechanistic evidence that length generalization transfer correlates with the reuse of the same attention heads between the tasks. Together, our findings deepen our understanding of how transformers generalize to out-of-distribution inputs and highlight the compositional reuse of inductive structure across tasks.
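To make the joint-training setup from the abstract concrete, here is a minimal, hypothetical data-mixing sketch (not from the paper; all function names and task formats are illustrative): a target task sampled only at short lengths is interleaved with a related auxiliary task sampled at longer lengths, so the model observes the shared structural pattern beyond the target task's training range.

```python
import random

def make_reverse_example(length):
    """Auxiliary task: string reversal, sampled at longer sequence lengths."""
    s = "".join(random.choice("abc") for _ in range(length))
    return f"rev {s} = {s[::-1]}"

def make_addition_example(n_digits):
    """Target task: multi-digit addition, sampled only at shorter lengths."""
    a = random.randrange(10 ** (n_digits - 1), 10 ** n_digits)
    b = random.randrange(10 ** (n_digits - 1), 10 ** n_digits)
    return f"add {a} {b} = {a + b}"

def build_joint_dataset(n_examples, aux_max_len=40, target_max_digits=10,
                        aux_ratio=0.5):
    """Interleave long auxiliary and short target examples in one stream,
    so length patterns are seen beyond the target task's training range."""
    data = []
    for _ in range(n_examples):
        if random.random() < aux_ratio:
            # Auxiliary examples deliberately exceed the target length range.
            data.append(make_reverse_example(
                random.randint(target_max_digits + 1, aux_max_len)))
        else:
            data.append(make_addition_example(
                random.randint(1, target_max_digits)))
    return data
```

Such a mixed stream would then be tokenized and fed to a standard transformer trainer; the paper's claim is that the auxiliary task's longer inputs scaffold extrapolation on the target task.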
Problem

Research questions and friction points this paper is trying to address.

Understanding how transformers generalize to longer inputs
Investigating transfer of length generalization across tasks
Exploring reuse of attention heads for extrapolation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transfer length generalization across related tasks
Joint training enables inheritance of generalization
Reuse attention heads for task association