🤖 AI Summary
This paper addresses the convergence of end-to-end training for discrete flow matching (DFM) generative models, establishing the first theoretical framework proving that the learned distribution provably converges to the true data distribution as the sample size increases. Methodologically, it decomposes the distribution estimation error into a controllable chain: neural network approximation error, statistical estimation error, and total variation (TV) error of the generated distribution—leveraging Transformer-based velocity field modeling, TV-distance analysis, and statistical learning theory to quantify finite-sample convergence rates and the impact of neural network capacity. Key contributions are: (1) the first rigorous convergence guarantee for end-to-end DFM training; (2) explicit upper bounds on the approximation and estimation errors in terms of model capacity and sample size; and (3) the first statistical learning–theoretic foundation for discrete generative models.
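To make the "generative dynamics" concrete: in discrete settings, a velocity field can be viewed as a rate matrix driving a continuous-time Markov chain over a finite state space, and sampling amounts to integrating the marginal distribution forward in time. The toy sketch below (illustrative only; the names and the rate matrix `Q` are not from the paper) shows a Euler step on the marginals and checks that probability mass is conserved:

```python
# Toy illustration of discrete generative dynamics (not the paper's construction).
# A "velocity field" is modeled as a rate matrix Q over 3 states: off-diagonal
# entries are nonnegative and each row sums to zero, so the Euler update
# p <- p + dt * (p @ Q) conserves total probability mass.

def euler_step(p, Q, dt):
    """One Euler step of the marginal distribution p under rate matrix Q."""
    n = len(p)
    return [p[i] + dt * sum(p[j] * Q[j][i] for j in range(n)) for i in range(n)]

# Hand-picked rate matrix: rows sum to zero, off-diagonal entries >= 0.
Q = [
    [-1.0,  0.7,  0.3],
    [ 0.2, -0.5,  0.3],
    [ 0.1,  0.4, -0.5],
]

p = [1.0, 0.0, 0.0]  # start with all mass on state 0
for _ in range(100):
    p = euler_step(p, Q, 0.01)

print(round(sum(p), 6))  # total mass stays 1.0 under the zero-row-sum property
```

In an actual DFM model, the rate matrix (velocity) is time-dependent and parameterized by a neural network such as a Transformer; the paper's risk measures how far this learned velocity is from the true one.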
📝 Abstract
We provide a theoretical analysis of end-to-end training for Discrete Flow Matching (DFM) generative models. DFM is a promising discrete generative modeling framework that learns the underlying generative dynamics by training a neural network to approximate the velocity field that transports a source distribution to the data distribution. Our analysis establishes a clear chain of guarantees by decomposing the final distribution estimation error. We first prove that the total variation distance between the generated and target distributions is controlled by the risk of the learned velocity field. We then bound this risk by analyzing its two primary sources: (i) Approximation Error, where we quantify the capacity of the Transformer architecture to represent the true velocity, and (ii) Estimation Error, where we derive statistical convergence rates that bound the error from training on a finite dataset. By composing these results, we provide the first formal proof that the distribution generated by a trained DFM model provably converges to the true data distribution as the training set size increases.
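The chain of guarantees can be sketched schematically as follows. The symbols here are illustrative placeholders, not the paper's notation, and the exact constants and exponents are a plausible shape rather than the paper's statements: writing $\hat{v}$ for the learned velocity field, $p_{\hat{v}}$ for the distribution it generates, $\mathcal{R}(\hat{v})$ for its risk, and $n$ for the training set size,

$$
\mathrm{TV}\big(p_{\hat{v}},\, p_{\mathrm{data}}\big) \;\le\; \phi\big(\mathcal{R}(\hat{v})\big),
\qquad
\mathcal{R}(\hat{v}) \;\le\; \underbrace{\epsilon_{\mathrm{approx}}}_{\text{network capacity}} \;+\; \underbrace{\epsilon_{\mathrm{est}}(n)}_{\text{finite samples}},
$$

where $\phi$ is an increasing function vanishing at zero, $\epsilon_{\mathrm{approx}}$ shrinks as the Transformer class grows richer, and $\epsilon_{\mathrm{est}}(n) \to 0$ as $n \to \infty$. Composing the two bounds yields the claimed convergence of the generated distribution to the data distribution.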