🤖 AI Summary
Transformers often fail to generalize to sequences longer than those seen during training, and existing theoretical explanations of when length extrapolation succeeds or fails remain inadequate.
Method: We establish the first rigorous formal framework characterizing the class of functions that causal transformers with learnable absolute positional encodings can identify in the limit from sufficiently long inputs, together with conditions under which length extrapolation provably succeeds. Our approach integrates formal language theory, function identifiability analysis, norm-regularized idealized models, and explicit modeling of positional encodings, complemented by theory-guided empirical validation.
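To make the norm-regularized idealized inference scheme concrete, here is a minimal sketch in our own notation (the hypothesis class, the target function f*, and the particular norm are illustrative assumptions rather than the paper's exact definitions):

```latex
% Illustrative norm-minimizing inference (notation ours, not the paper's):
% among all transformers T in a fixed class \mathcal{T} that reproduce the target
% function f^{*} on every input of length at most n, select one of minimal norm.
\hat{T}_n \;\in\; \operatorname*{arg\,min}_{\, T \in \mathcal{T},\ T(x) = f^{*}(x)\ \text{for all}\ |x| \le n} \ \lVert T \rVert
% "Identifiable in the limit": there exists N such that for every n \ge N,
% \hat{T}_n agrees with f^{*} on inputs of all lengths, not only those seen in training.
```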
Contributions/Results: (1) We derive the first provably sound theoretical criterion for length generalization; (2) we uncover fundamental connections between the structure of positional encodings and the learnability of target functions; and (3) we achieve *a priori*, empirically confirmed prediction of generalization success or failure on canonical tasks including bracket matching, counting, and sequence copying. Our theory explains well-known empirical phenomena and yields design principles for length-generalizable architectures that are grounded in provable guarantees.
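To make the canonical tasks concrete, a minimal sketch of how such tasks are typically posed as sequence functions; the function names and input/output conventions below are illustrative assumptions, not the paper's definitions:

```python
# Illustrative formulations of the canonical tasks (conventions are ours, not the paper's).

def bracket_matching(s: str) -> bool:
    """Dyck-style bracket matching: is every '(' closed by a later ')'?"""
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:          # a ')' appeared before its matching '('
            return False
    return depth == 0          # all opened brackets were eventually closed

def counting(s: str) -> int:
    """Counting: number of occurrences of the symbol 'a' in the input."""
    return sum(ch == "a" for ch in s)

def copying(s: str) -> str:
    """Sequence copying: the target output is the input sequence itself."""
    return s

# Length generalization asks whether a transformer trained on short inputs
# (e.g. |s| <= 50) still computes these functions correctly on much longer ones.
print(bracket_matching("(()())"), counting("abab"), copying("xyz"))
```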
📝 Abstract
A major challenge for transformers is generalizing to sequences longer than those observed during training. While previous works have empirically shown that transformers can either succeed or fail at length generalization depending on the task, theoretical understanding of this phenomenon remains limited. In this work, we introduce a rigorous theoretical framework to analyze length generalization in causal transformers with learnable absolute positional encodings. In particular, we characterize those functions that are identifiable in the limit from sufficiently long inputs with absolute positional encodings under an idealized inference scheme using a norm-based regularizer. This enables us to prove the possibility of length generalization for a rich family of problems. We experimentally validate the theory as a predictor of success and failure of length generalization across a range of algorithmic and formal language tasks. Our theory not only explains a broad set of empirical observations but also opens the way to provably predicting length generalization capabilities in transformers.
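As a rough illustration of the experimental protocol the abstract refers to (train on short sequences, evaluate on strictly longer ones), here is a minimal sketch for the copying task; the length cutoffs, `sample_copy_example`, and the commented-out `train_model` / `evaluate` helpers are hypothetical placeholders, not the paper's code:

```python
import random

# Hypothetical length-generalization protocol: fit on short inputs, test on longer ones.
TRAIN_MAX_LEN, TEST_MIN_LEN, TEST_MAX_LEN = 50, 51, 500

def sample_copy_example(min_len: int, max_len: int) -> tuple[str, str]:
    """Draw one instance of the copying task with length in [min_len, max_len]."""
    n = random.randint(min_len, max_len)
    s = "".join(random.choice("ab") for _ in range(n))
    return s, s  # input and target coincide for copying

train_set = [sample_copy_example(1, TRAIN_MAX_LEN) for _ in range(10_000)]
test_set = [sample_copy_example(TEST_MIN_LEN, TEST_MAX_LEN) for _ in range(1_000)]

# model = train_model(train_set)            # hypothetical training routine
# exact_match = evaluate(model, test_set)   # fraction of longer inputs copied exactly
# Length generalization succeeds if exact_match stays near 1.0 despite the length gap.
```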