🤖 AI Summary
This work addresses the problem of efficiently sampling from constrained posterior measures over high-dimensional continuous-time path spaces—such as stochastic paths satisfying boundary conditions or observational constraints—without access to ground-truth trajectory data. We propose the first end-to-end neural framework that integrates controlled stochastic differential equation (SDE) dynamics with infinite-dimensional Wasserstein gradient flows: a neural-parameterized control function steers the SDE evolution, while variational optimization is performed directly in the Wasserstein space to implicitly model the path density. The method is theoretically guaranteed to converge and accommodates arbitrary prior stochastic processes and likelihood structures. Empirically, it achieves significant improvements in both fidelity and computational efficiency for posterior path generation in nonlinear filtering and inverse problems.
📝 Abstract
We propose algorithms for sampling from posterior path measures $\mathcal{P}(C([0, T], \mathbb{R}^d))$ under a general prior process. This leverages ideas from (1) controlled equilibrium dynamics, which gradually transport between two path measures, and (2) optimization in $\infty$-dimensional probability space endowed with a Wasserstein metric, which can be used to evolve a density curve under the specified likelihood. The resulting algorithms are theoretically grounded and can be integrated seamlessly with neural networks for learning the target trajectory ensembles, without access to data.
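To make the "controlled SDE" ingredient concrete, here is a minimal, hypothetical sketch of simulating a controlled diffusion $dX_t = (b(X_t, t) + u(X_t, t))\,dt + \sigma\,dW_t$ with Euler–Maruyama. The function names, the toy bridge-style control, and all parameters are illustrative stand-ins (not the paper's method): in the actual framework the control $u$ would be a learned neural network steering prior paths toward the constrained posterior.

```python
# Hedged sketch: Euler-Maruyama simulation of a controlled SDE.
# All names here (euler_maruyama_controlled, the bridge control) are
# illustrative assumptions, not APIs or components from the paper.
import numpy as np

def euler_maruyama_controlled(x0, drift, control, sigma, T=1.0, n_steps=100, rng=None):
    """Simulate one path of dX = (drift(X,t) + control(X,t)) dt + sigma dW on [0, T]."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    path = np.empty((n_steps + 1, x0.shape[0]))
    path[0] = x0
    for k in range(n_steps):
        t = k * dt
        x = path[k]
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)  # Brownian increment
        path[k + 1] = x + (drift(x, t) + control(x, t)) * dt + sigma * dw
    return path

# Toy example: a Brownian prior (zero drift) steered toward a terminal
# constraint x(T) = 1 by a simple bridge-style control, standing in for
# a learned neural control satisfying a boundary condition.
x_target = np.array([1.0])
drift = lambda x, t: np.zeros_like(x)
control = lambda x, t: (x_target - x) / max(1.0 - t, 1e-3)
path = euler_maruyama_controlled(
    np.zeros(1), drift, control, sigma=0.5,
    T=1.0, n_steps=200, rng=np.random.default_rng(0),
)
```

The bridge-style control here pulls every sampled path toward the target as $t \to T$, mimicking conditioning the prior process on a boundary observation; in the proposed framework this hand-crafted control is replaced by a neural control optimized via Wasserstein gradient flow.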