🤖 AI Summary
This work addresses the problem of efficiently learning stationary diffusions, i.e., estimating the parameters of a stochastic differential equation (SDE) whose stationary distribution matches a given target. To this end, the authors propose the Stein-type kernel deviation from stationarity (SKDS), a variant of the recently introduced KDS that characterizes stationarity through the expected action of the diffusion's generator in a reproducing kernel Hilbert space. Theoretically, SKDS vanishes if and only if the learned diffusion admits the target as its stationary distribution; moreover, SKDS is convex under broad parametrizations, and its empirical version is $\epsilon$-quasiconvex with high probability. By integrating Stein discrepancies, kernel methods, and SDE modeling, the approach matches the accuracy of KDS at a substantially lower computational cost and outperforms most competitive baselines.
📝 Abstract
Learning a stationary diffusion amounts to estimating the parameters of a stochastic differential equation whose stationary distribution matches a target distribution. We build on the recently introduced kernel deviation from stationarity (KDS), which enforces stationarity by evaluating expectations of the diffusion's generator in a reproducing kernel Hilbert space. Leveraging the connection between KDS and Stein discrepancies, we introduce the Stein-type KDS (SKDS) as an alternative formulation. We prove that a vanishing SKDS guarantees alignment of the learned diffusion's stationary distribution with the target. Furthermore, under broad parametrizations, SKDS is convex, and its empirical version is $\epsilon$-quasiconvex with high probability. Empirically, learning with SKDS attains accuracy comparable to that of KDS while substantially reducing computational cost, and yields improvements over the majority of competitive baselines.
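To make the construction concrete, here is a minimal JAX sketch of a generator-based kernel stationarity discrepancy, in the spirit of the Stein discrepancy literature rather than the paper's exact estimator. Everything in it is an illustrative assumption: the one-dimensional SDE $dX_t = b_\theta(X_t)\,dt + \sigma_\theta(X_t)\,dW_t$ with affine drift `drift` and constant diffusion `sigma2`, the Gaussian kernel `kernel`, and the V-statistic estimator `skds2`. The key fact it encodes: for a stationary density $\pi$, the generator $(\mathcal{A}f)(x) = b(x)f'(x) + \tfrac{1}{2}\sigma^2(x)f''(x)$ satisfies $\mathbb{E}_\pi[(\mathcal{A}f)(X)] = 0$ for all test functions $f$, and taking the supremum over the RKHS unit ball yields the squared discrepancy $\mathbb{E}_{x,x' \sim \pi}[\mathcal{A}_x \mathcal{A}_{x'} k(x, x')]$.

```python
import jax
import jax.numpy as jnp

# Hypothetical 1-D parameterization (illustrative, not the paper's model):
# affine drift b_theta(x) and a positive, state-independent diffusion.
def drift(theta, x):
    return theta[0] * x + theta[1]

def sigma2(theta, x):
    return jax.nn.softplus(theta[2]) ** 2

def kernel(x, y, bw=1.0):
    # Gaussian RBF kernel; the bandwidth bw is a free choice.
    return jnp.exp(-0.5 * (x - y) ** 2 / bw ** 2)

def generator(theta, f):
    # 1-D infinitesimal generator: (A f)(x) = b(x) f'(x) + 0.5 sigma^2(x) f''(x).
    df = jax.grad(f)
    d2f = jax.grad(df)
    return lambda x: drift(theta, x) * df(x) + 0.5 * sigma2(theta, x) * d2f(x)

def skds2(theta, xs):
    # Empirical squared discrepancy as a V-statistic:
    # mean over i, j of (A_x A_y k)(x_i, x_j), applying A to each kernel argument.
    def h(x, y):
        Ak_in_x = lambda yy: generator(theta, lambda xx: kernel(xx, yy))(x)
        return generator(theta, Ak_in_x)(y)
    H = jax.vmap(lambda x: jax.vmap(lambda y: h(x, y))(xs))(xs)
    return H.mean()

# Usage sketch: fit the diffusion to samples from a standard normal target.
key = jax.random.PRNGKey(0)
xs = jax.random.normal(key, (256,))
theta = jnp.array([-0.5, 0.0, 0.0])
for _ in range(200):
    theta = theta - 0.1 * jax.grad(skds2)(theta, xs)  # plain gradient descent
```

Nested automatic differentiation applies the generator to each argument of the kernel in turn, so no closed-form kernel derivatives are needed; the cost is fourth-order differentiation of $k$, which JAX handles but which motivates cheaper formulations such as the one the paper pursues.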