🤖 AI Summary
This work addresses the Schrödinger bridge problem in the practical setting where only samples from the two endpoint distributions are available. The authors reformulate the Schrödinger system in terms of a single positive transformed potential that satisfies a nonlinear fixed-point equation, and they estimate this potential over a given function class via empirical risk minimization. The learned potential is then plugged into a stochastic control representation of the bridge to generate samples, circumventing the conventional reliance on Sinkhorn iterations and differentiation of kernel-smoothed dual solutions. Under sub-Gaussian assumptions on the reference kernel and terminal density, the authors establish uniform concentration of the empirical risk around its population counterpart and illustrate the performance of the approach with numerical experiments.
📝 Abstract
We study the Schr\"odinger bridge problem when the endpoint distributions are available only through samples. Classical computational approaches estimate Schr\"odinger potentials via Sinkhorn iterations on empirical measures and then construct a time-inhomogeneous drift by differentiating a kernel-smoothed dual solution. In contrast, we propose a learning-theoretic route: we rewrite the Schr\"odinger system in terms of a single positive transformed potential that satisfies a nonlinear fixed-point equation and estimate this potential by empirical risk minimization over a function class. We establish uniform concentration of the empirical risk around its population counterpart under sub-Gaussian assumptions on the reference kernel and terminal density. We plug the learned potential into a stochastic control representation of the bridge to generate samples. We illustrate the performance of the proposed approach with numerical experiments.
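The abstract does not spell out the paper's risk functional or function class. As a rough, hedged illustration of the overall pipeline only (fit a potential by empirical risk minimization from endpoint samples, then sample the bridge by simulating a drift built from that potential), the sketch below substitutes a standard stand-in: the concave entropic semi-dual objective over a log-quadratic potential class, on a 1-D Gaussian toy problem with a Brownian reference process. The sampling stage uses the Doob h-transform drift of the reference diffusion, evaluated by plugging the learned potential into a mixture over the terminal sample points. Every modeling choice here (marginals, features, objective, step sizes) is an assumption for illustration, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy endpoint samples (assumed for illustration): rho_0 = N(-2, 0.5^2), rho_1 = N(+2, 0.5^2).
n = 300
x0 = rng.normal(-2.0, 0.5, n)   # samples from the initial marginal
y1 = rng.normal(+2.0, 0.5, n)   # samples from the terminal marginal

eps = 1.0                                        # entropic regularization / reference diffusivity
cost = 0.5 * (x0[:, None] - y1[None, :]) ** 2    # quadratic cost c(x, y) = |x - y|^2 / 2

# Hypothetical potential class: g_theta(y) = theta . psi(y) with psi(y) = (1, y, y^2).
psi = np.stack([np.ones_like(y1), y1, y1 ** 2], axis=1)   # (n, 3) feature matrix
theta = np.zeros(3)

def softmax_rows(logits):
    """Numerically stable row-wise softmax."""
    m = logits.max(axis=1, keepdims=True)
    e = np.exp(logits - m)
    return e / e.sum(axis=1, keepdims=True)

# ERM stage: gradient ascent on the (concave) entropic semi-dual over empirical measures,
#   F(theta) = mean_j g(y_j) - eps * mean_i log mean_j exp((g(y_j) - c(x_i, y_j)) / eps).
for _ in range(1500):
    g = psi @ theta
    w = softmax_rows((g[None, :] - cost) / eps)   # (n, n) conditional weights under current g
    grad = psi.mean(axis=0) - (w @ psi).mean(axis=0)
    theta += 0.05 * grad

# Sampling stage: h-transform of the reference SDE dX = sqrt(eps) dW on [0, 1], with
#   h(t, x) = sum_j exp(g(y_j)/eps) N(y_j; x, eps(1 - t)),  drift b = eps * d/dx log h.
g = psi @ theta
X = x0.copy()                                     # start the controlled process from rho_0 samples
dt, T, t = 0.01, 0.99, 0.0                        # stop just before t = 1 to avoid collapse onto y_j
while t < T:
    logits = (g[None, :] - 0.5 * (X[:, None] - y1[None, :]) ** 2 / (1.0 - t)) / eps
    w = softmax_rows(logits)
    drift = (w @ y1 - X) / (1.0 - t)              # closed form of eps * d/dx log h(t, X)
    X = X + drift * dt + np.sqrt(eps * dt) * rng.normal(size=n)
    t += dt

print(X.mean(), X.std())   # final samples should roughly resemble rho_1 = N(+2, 0.5^2)
```

The drift has a closed form here because h(t, x) is a finite Gaussian mixture over the terminal samples: differentiating log h gives a softmax-weighted average pull toward the y_j points, which strengthens as t approaches 1.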