🤖 AI Summary
Transformer-based large language models exhibit significant limitations in length generalization, i.e., extrapolating to sequences longer than those seen during training. This work addresses the problem for computationally grounded reasoning tasks and proposes TAIL, a Turing Machine imitation learning framework. TAIL models Turing Machine behavior, including read/write head movement and state transitions, through atomic state decomposition and an explicit memory access mechanism: it programmatically synthesizes chain-of-thought data so that reasoning steps scale linearly with problem size, and it relies on attention to model memory operations. Crucially, TAIL uses only synthetic data. Applied to Qwen2.5-7B, it substantially improves generalization across multiple long-sequence reasoning benchmarks, consistently outperforming prior methods and DeepSeek-R1. These results validate the Turing Machine abstraction as a principled guide for length generalization and demonstrate the broad applicability and effectiveness of the approach.
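To make the summary concrete, here is a minimal sketch, not the authors' code, of the atomic states a Turing Machine trace would record: at every step, the control state, head position, and symbol under the head are written out explicitly. The `run_tm` function and the binary-increment transition table are illustrative assumptions.

```python
# Minimal single-tape Turing machine simulator that records each atomic
# configuration (control state, head position, symbol under the head).
# Illustrative sketch only; not the paper's implementation.

def run_tm(tape, transitions, state="scan", blank="_", max_steps=1000):
    """Run a single-tape TM and return the trace of atomic states."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head, trace = 0, []
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        trace.append({"state": state, "head": head, "read": symbol})
        if state == "halt":
            return trace, tape
        # Each transition is one atomic step: write, move, change state.
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1, "S": 0}[move]
    raise RuntimeError("step budget exceeded")

# Hypothetical example task: binary increment (most-significant bit first).
# Scan right to the end of the input, then propagate the carry leftward.
INC = {
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "S", "halt"),
    ("carry", "_"): ("1", "S", "halt"),
}

trace, tape = run_tm(list("1011"), INC)  # 11 + 1 = 12, i.e., "1100"
```

Because every step emits one atomic record, the trace length grows linearly with the number of machine steps, which matches the summary's claim about linearly scaling reasoning steps.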
📝 Abstract
Length generalization, the ability to solve problems with sequences longer than those observed during training, poses a core challenge for Transformer-based large language models (LLMs). Existing studies have predominantly focused on data-driven approaches for arithmetic operations and symbolic manipulation tasks, but these approaches tend to be task-specific and achieve limited overall performance. To pursue a more general solution, this paper focuses on the broader class of computable reasoning problems, i.e., problems that an algorithm, and hence a Turing Machine, can solve. From this perspective, this paper proposes Turing MAchine Imitation Learning (TAIL) to improve the length generalization ability of LLMs. TAIL uses computer programs to synthesize chain-of-thought (CoT) data that imitate the execution process of a Turing Machine: it linearly expands the reasoning steps into atomic states to alleviate shortcut learning, and it adopts an explicit memory fetch mechanism to reduce the difficulty of dynamic, long-range data access in elementary operations. To validate the reliability and universality of TAIL, we construct a challenging synthetic dataset covering 8 classes of algorithms and 18 tasks. Without bells and whistles, TAIL significantly improves both the length generalization ability and the performance of Qwen2.5-7B on various tasks using only synthetic data, surpassing previous methods and DeepSeek-R1. The experimental results reveal that the key concepts of the Turing Machine, rather than particular thinking styles, are indispensable to TAIL for length generalization; with them, the model exhibits read-and-write behaviors in its attention layers that are consistent with the properties of the Turing Machine. This work provides a promising direction for future research on learning LLM reasoning from synthetic data.
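The abstract's linear expansion into atomic states with explicit memory fetches can be pictured as serializing a machine trace into CoT text. The sketch below, reusing `run_tm` and `INC` from the earlier example, restates the fetched symbol at every step so the model never has to resolve a long-range reference implicitly; the field names and wording are illustrative assumptions, and the paper's actual CoT format may differ.

```python
# Render a Turing machine trace as chain-of-thought text with explicit
# memory fetches. Illustrative format only; not the paper's template.

def trace_to_cot(trace, tape, blank="_"):
    """Serialize each atomic step, re-emitting the symbol under the head."""
    lines = []
    for step, cfg in enumerate(trace):
        # Explicit memory fetch: the fetched symbol appears verbatim in
        # the text at every step instead of being left implicit.
        lines.append(
            f"[step {step}] state={cfg['state']} "
            f"fetch tape[{cfg['head']}] -> {cfg['read']}"
        )
    result = "".join(sym for _, sym in sorted(tape.items()) if sym != blank)
    lines.append(f"[halt] tape = {result}")
    return "\n".join(lines)

trace, tape = run_tm(list("1011"), INC)  # from the earlier sketch
print(trace_to_cot(trace, tape))
```

Each printed line corresponds to one atomic state, so a model trained on such data can attend directly to the step where a symbol was last written rather than re-deriving it, which is the intuition behind the explicit memory fetch mechanism.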