How Transformers Learn Regular Language Recognition: A Theoretical Study on Training Dynamics and Implicit Bias

📅 2025-05-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates how a single-layer Transformer—comprising one self-attention layer and one linear classifier—learns two canonical regular languages via gradient descent: the “even-pairs” and “parity” tasks. Methodologically, it employs gradient flow analysis, generalization bounds, and 1D sequence experiments. Theoretically, it establishes a rigorous two-phase training dynamic: the attention layer rapidly separates input representations in feature space, while the linear layer slowly converges to the maximum-margin hyperplane at rate O(1/t). This work provides the first formal proof of such a two-phase dynamic in Transformer training. Crucially, it reveals that parity learning necessitates implicit chain-of-thought (CoT) reasoning, arising from structured inductive bias induced by joint optimization of attention and classification layers. Empirical results corroborate the theoretical predictions and demonstrate that CoT is necessary for generalization on parity.
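The summary's claim that parity requires chain-of-thought can be illustrated with a minimal sketch (the function name and chain format are illustrative, not the paper's construction): instead of predicting the parity of the whole sequence in one shot, the model emits a chain of intermediate prefix-parity tokens, so each step only needs a local XOR of the previous token with the next input bit.

```python
def parity_with_cot(seq):
    """Emit a chain of intermediate prefix-parity tokens, mimicking
    step-by-step (CoT) reasoning for the parity task.

    Each step is a simple local computation: XOR the previous chain
    token with the next input bit. The final token equals the XOR of
    all bits, i.e. 1 iff the number of 1s in `seq` is odd.
    """
    chain = [0]
    for x in seq:
        chain.append(chain[-1] ^ x)
    return chain

print(parity_with_cot([1, 0, 1, 1]))  # [0, 1, 1, 0, 1] -> final token 1 (three 1s, odd)
```

Each intermediate step is a far easier target than global parity, which is the intuition behind integrating CoT either at inference time or into training.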

📝 Abstract
Language recognition tasks are fundamental in natural language processing (NLP) and have been widely used to benchmark the performance of large language models (LLMs). These tasks also play a crucial role in explaining the working mechanisms of transformers. In this work, we focus on two representative tasks in the category of regular language recognition, known as `even pairs' and `parity check', the aim of which is to determine whether the occurrences of certain subsequences in a given sequence are even. Our goal is to explore how a one-layer transformer, consisting of an attention layer followed by a linear layer, learns to solve these tasks by theoretically analyzing its training dynamics under gradient descent. While even pairs can be solved directly by a one-layer transformer, parity check needs to be solved by integrating Chain-of-Thought (CoT), either into the inference stage of a transformer well-trained for the even pairs task, or into the training of a one-layer transformer. For both problems, our analysis shows that the joint training of attention and linear layers exhibits two distinct phases. In the first phase, the attention layer grows rapidly, mapping data sequences into separable vectors. In the second phase, the attention layer becomes stable, while the linear layer grows logarithmically and converges in direction to a max-margin hyperplane that correctly separates the attention layer outputs into positive and negative samples, and the loss decreases at a rate of $O(1/t)$. Our experiments validate these theoretical results.
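The two tasks can be made concrete with a short sketch on binary sequences (function names and the 0/1 labeling convention are illustrative, not taken from the paper): even pairs asks whether the combined count of `01` and `10` bigrams is even, which is equivalent to the first and last tokens being equal, while parity check asks whether the number of 1s is even.

```python
def even_pairs(seq):
    """Label 1 iff the combined number of '01' and '10' bigrams is even.

    Each such bigram is a "flip" between adjacent tokens; an even number
    of flips means the sequence ends on the value it started with, so
    this is equivalent to seq[0] == seq[-1].
    """
    flips = sum(seq[i] != seq[i + 1] for i in range(len(seq) - 1))
    return int(flips % 2 == 0)

def parity(seq):
    """Label 1 iff the number of 1s in the sequence is even."""
    return int(sum(seq) % 2 == 0)

print(even_pairs([0, 1, 1, 0]))  # 1 (two flips; first token == last token)
print(parity([0, 1, 1, 0]))      # 1 (two 1s, an even count)
```

Even pairs depends only on the endpoints of the sequence, whereas parity depends on every token at once, which is one intuition for why the latter is the harder task for a one-layer transformer.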
Problem

Research questions and friction points this paper is trying to address.

How transformers learn regular language recognition tasks
Training dynamics of one-layer transformers on even pairs and parity check
Implicit bias and gradient descent in transformer learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theoretical analysis of the training dynamics of a one-layer transformer
Chain-of-Thought integration enables solving the parity check task
Attention and linear layers exhibit two distinct training phases
Ruiquan Huang
Penn State University
machine learning
Yingbin Liang
Department Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA
Jing Yang
Department of Computer Science, University of Virginia, Charlottesville, VA 22904, USA