π€ AI Summary
This work addresses optimization of time-varying generative models on exponential family manifolds, establishing a geometric unification framework for generative training and probabilistic modeling. Methodologically, it projects model evolution onto the exponential family manifold and employs natural gradient descent for parameter updates; it is the first to embed generative training intrinsically within this manifoldβs geometric structure. We propose an efficient, MCMC-free approximation of the natural gradient of the KL divergence, and design a closed-form particle update algorithm applicable to arbitrary exponential family models. Experiments on synthetic and real-world benchmarks demonstrate that the proposed approach achieves faster convergence, improved estimation stability, and significantly enhanced generative quality and training efficiency compared to baseline methods.
π Abstract
Optimising probabilistic models is a well-studied field in statistics. However, its connection with the training of generative models remains largely under-explored. In this paper, we show that the evolution of time-varying generative models can be projected onto an exponential family manifold, naturally creating a link between the parameters of a generative model and those of a probabilistic model. We then train the generative model by moving its projection on the manifold according to the natural gradient descent scheme. This approach also allows us to approximate the natural gradient of the KL divergence efficiently without relying on MCMC for intractable models. Furthermore, we propose particle versions of the algorithm, which feature closed-form update rules for any parametric model within the exponential family. Through toy and real-world experiments, we validate the effectiveness of the proposed algorithms.