Dynamical Properties of Tokens in Self-Attention and Effects of Positional Encoding

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the dynamic evolution of token representations in pretrained Transformers, focusing on how positional encoding schemes -- absolute versus rotary -- affect their continuous-time dynamical behavior. It proposes a modeling framework grounded in nonlinear dynamical systems theory, deriving sufficient conditions on the model parameters under which token representations converge to zero or diverge to infinity. The analysis systematically demonstrates that rotary positional encoding suppresses pathological convergence and enhances representation separation. Both theory and experiments confirm that excessive token convergence degrades model expressivity. Guided by these insights, the authors design lightweight architectural enhancements -- such as dynamic attention scaling -- that mitigate convergence issues and yield consistent performance gains across multiple benchmarks. The work establishes a dynamical-systems perspective on Transformer internals and provides interpretable, theory-backed principles for architecture optimization.

📝 Abstract
This paper investigates the dynamical properties of tokens in pre-trained Transformer models and explores their application to improving Transformers. To this end, we analyze the dynamical system governing the continuous-time limit of the pre-trained model and characterize the asymptotic behavior of its solutions. Specifically, we characterize when tokens move closer to or farther from one another over time, depending on the model parameters. We provide sufficient conditions, based on these parameters, to identify scenarios where tokens either converge to zero or diverge to infinity. Unlike prior works, our conditions are broader in scope and more applicable to real-world models. Furthermore, we investigate how different forms of positional encoding -- specifically absolute and rotary -- affect these dynamical regimes. Empirical evidence reveals that the convergence scenario adversely impacts model performance. Motivated by these insights, we propose simple refinements to Transformer architectures that mitigate convergence behavior in models with absolute or rotary positional encoding. These findings support theoretical foundations and design principles for improving Transformer models.
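To make the abstract's setting concrete, the continuous-time limit of self-attention is commonly written as a system of coupled ODEs over the token representations. The following is a sketch of that general formulation from the dynamical-systems literature on Transformers, not the paper's exact equations; here x_i(t) is the i-th token's representation and Q, K, V denote the query, key, and value matrices:

\dot{x}_i(t) = \sum_{j=1}^{n} \frac{\exp\!\big(\langle Q x_i(t),\, K x_j(t)\rangle\big)}{\sum_{k=1}^{n} \exp\!\big(\langle Q x_i(t),\, K x_k(t)\rangle\big)}\, V x_j(t), \qquad i = 1, \dots, n.

In this picture, the "convergence to zero" regime means all x_i(t) approach 0 as t grows, while "divergence to infinity" means the norms ||x_i(t)|| blow up; positional encoding enters by modifying the arguments Q x_i and K x_j (for rotary encoding, by rotating them in position-dependent planes).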
Problem

Research questions and friction points this paper is trying to address.

How do token representations evolve in the continuous-time limit of a pre-trained Transformer?
Under which model parameters do tokens converge to zero or diverge to infinity over time?
How do absolute and rotary positional encodings shape these dynamical regimes, and can harmful ones be mitigated?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes token dynamics in continuous-time Transformer limits
Characterizes token convergence/divergence based on model parameters
Proposes architectural refinements to mitigate harmful convergence effects
👥 Authors
Duy-Tung Pham -- FPT Software AI Center, Vietnam -- Machine Learning, Deep Learning, Topic Modeling, Probabilistic Model
An The Nguyen -- FPT Software AI Center, Hanoi, Vietnam
Viet-Hoang Tran -- National University of Singapore -- Machine Learning
Nhan-Phu Chung -- Ho Chi Minh University of Economics, Ho Chi Minh City, Vietnam
Xin T. Tong -- National University of Singapore -- Data assimilation, Uncertainty quantification, Applied probability
Tan M. Nguyen -- Department of Mathematics, National University of Singapore
Thieu Vo -- Department of Mathematics, National University of Singapore