An Optimal Control Approach To Transformer Training

📅 2026-03-10
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the lack of global optimality guarantees in Transformer training, its reliance on gradient-based methods, and its neglect of structural constraints by rigorously introducing optimal control theory into the training framework. The Transformer is modeled as a discrete-time controlled particle system with shared actions, and by lifting the problem to the space of probability measures, a fully observable Markov decision process is constructed. Combining dynamic programming with triple quantization of the state, measure, and action spaces, the approach achieves globally optimal and robust training without requiring smoothness or convexity assumptions. Theoretical contributions include proving the existence of a globally optimal policy, establishing the equivalence between closed-loop policies and input-independent open-loop policies, and demonstrating stability with respect to perturbations of the initial empirical measure, as well as convergence of the policy as the data volume increases.

📝 Abstract
In this paper, we develop a rigorous optimal control-theoretic approach to Transformer training that respects key structural constraints such as (i) realized-input-independence during execution, (ii) the ensemble-control nature of the problem, and (iii) positional dependence. We model the Transformer architecture as a discrete-time controlled particle system with shared actions, exhibiting noise-free McKean-Vlasov dynamics. While the resulting dynamics is not Markovian, we show that lifting it to the space of probability measures produces a fully observed Markov decision process (MDP). Positional encodings are incorporated into the state space to preserve the sequence order under lifting. Using the dynamic programming principle, we establish the existence of globally optimal policies under mild compactness assumptions. We further prove that closed-loop policies in the lifted MDP are equivalent to initial-distribution-dependent open-loop policies, which are realized-input-independent and compatible with standard Transformer training. To train a Transformer, we propose a triply quantized training procedure for the lifted MDP that quantizes the state space, the space of probability measures, and the action space, and we show that any optimal policy for the triply quantized model is near-optimal for the original training problem. Finally, we establish stability and empirical-consistency properties of the lifted model by showing that the value function is continuous with respect to perturbations of the initial empirical measure and that policies converge as the data size increases. This approach provides a globally optimal and robust alternative to gradient-based training without requiring smoothness or convexity assumptions.
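The lifting-and-quantization idea in the abstract can be illustrated with a deliberately tiny toy model. The sketch below is hypothetical and not the authors' algorithm: particles live on a quantized 1-D grid (standing in for token states), a single shared action shifts all particles at once (the ensemble-control constraint), the empirical measure of the particles is the lifted MDP state, and backward dynamic programming over this finite lifted model recovers a globally optimal action sequence. The grid, action set, horizon, and terminal cost are all invented for illustration.

```python
# Toy sketch (hypothetical, not the paper's implementation) of dynamic
# programming on a lifted, fully quantized particle-control problem.

STATES = (0, 1, 2, 3)    # quantized state space (particle positions)
ACTIONS = (-1, 0, 1)     # quantized shared-action space
HORIZON = 3              # number of "layers" / time steps

def step(x, a):
    """Shared-action particle dynamics, clipped to the quantized grid."""
    return min(max(x + a, STATES[0]), STATES[-1])

def lift(particles):
    """Lifted state: the (sorted) empirical measure of the particles."""
    return tuple(sorted(particles))

def terminal_cost(measure):
    """Toy terminal cost: squared distance of the empirical mean from 2.0."""
    mean = sum(measure) / len(measure)
    return (mean - 2.0) ** 2

def solve(measure, t=0, cache=None):
    """Backward dynamic programming on the lifted MDP.

    Returns (optimal cost, optimal action sequence). Since the lifted model
    is finite (quantized states, measures, and actions), exhaustive DP is
    globally optimal with no smoothness or convexity assumptions.
    """
    if cache is None:
        cache = {}
    key = (measure, t)
    if key not in cache:
        if t == HORIZON:
            cache[key] = (terminal_cost(measure), ())
        else:
            best = None
            for a in ACTIONS:
                nxt = lift(step(x, a) for x in measure)
                cost, tail = solve(nxt, t + 1, cache)
                if best is None or cost < best[0]:
                    best = (cost, (a,) + tail)
            cache[key] = best
    return cache[key]

cost, actions = solve(lift([0, 0, 1]))
print(cost, actions)  # → 0.0 (-1, 1, 1)
```

Note that the returned action sequence depends only on the initial empirical measure, not on which particle realizes which value: a finite analogue of the paper's equivalence between closed-loop policies on the lifted state and initial-distribution-dependent open-loop policies. In this instance the optimum even exploits the clipping at the grid boundary (shift down to merge the particles at 0, then shift up to the target).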
Problem

Research questions and friction points this paper is trying to address.

Optimal Control
Transformer Training
Markov Decision Process
Positional Encoding
Global Optimality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal Control
Markov Decision Process
McKean-Vlasov Dynamics
Triply Quantized Training
Positional Encoding
Kağan Akman
Department of Mathematics, Bilkent University, Ankara, 06800, Turkey
Naci Saldı
Department of Mathematics, Bilkent University, Ankara, 06800, Turkey
Serdar Yüksel
Queen's University
stochastic control theory · robustness and learning · information theory · stochastic dynamical systems