🤖 AI Summary
To address the scalability bottleneck in latent stochastic differential equation (SDE) training, which stems from reliance on numerical simulation and adjoint-based sensitivity backpropagation, this paper introduces SDE Matching, a novel training paradigm. It is the first to adapt Score/Flow Matching principles to stochastic dynamical modeling, enabling fully simulation-free learning. The method formulates distribution matching over stochastic trajectories in the latent space, parameterizes the dynamics via neural SDEs, and employs implicit gradient estimation to bypass explicit SDE solving and adjoint integration. Experiments on multiple time-series benchmarks demonstrate that SDE Matching matches the modeling accuracy of the adjoint method while accelerating training by 3–5× and substantially reducing memory consumption. Moreover, it natively supports long sequences and high-dimensional latent representations, overcoming key limitations of conventional approaches.
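To make the simulation-free idea concrete, here is a minimal toy sketch (an illustrative assumption, not the paper's actual objective): for a 1-D Ornstein–Uhlenbeck process the marginal distribution of the state is available in closed form, so one can sample `(t, z_t)` pairs directly and regress a parametric drift against the known target drift, with no SDE solver or adjoint backpropagation in the training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth OU process: dZ = -a Z dt + sigma dW, Z_0 = z0.
# (a_true, sigma, z0 and the linear drift model are illustrative choices.)
a_true, sigma, z0 = 1.5, 0.5, 2.0

def sample_marginal(n):
    """Draw (t, z_t) from the exact OU marginals -- no numerical SDE solve."""
    t = rng.uniform(0.05, 2.0, size=n)
    mean = z0 * np.exp(-a_true * t)
    var = sigma**2 / (2 * a_true) * (1 - np.exp(-2 * a_true * t))
    z = mean + np.sqrt(var) * rng.standard_normal(n)
    return t, z

theta = 0.0  # parameter of the drift model f_theta(z) = -theta * z
lr = 0.05
for _ in range(500):
    t, z = sample_marginal(256)
    residual = (-theta * z) - (-a_true * z)  # f_theta(z) minus the true drift
    grad = np.mean(2 * residual * (-z))      # gradient of the MSE matching loss
    theta -= lr * grad

# theta should now be close to a_true, recovered without simulating a trajectory
```

In a real latent SDE model the closed-form marginal is replaced by a learned posterior path and the linear drift by a neural network, but the training loop keeps this shape: sample a time and a state, evaluate a matching loss, and take a gradient step.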
📝 Abstract
The Latent Stochastic Differential Equation (SDE) is a powerful tool for time series and sequence modeling. However, training Latent SDEs typically relies on adjoint sensitivity methods, which require simulating and backpropagating through approximate SDE solutions, limiting scalability. In this work, we propose SDE Matching, a new simulation-free method for training Latent SDEs. Inspired by modern Score- and Flow Matching algorithms for learning generative dynamics, we extend these ideas to stochastic dynamics for time series and sequence modeling, eliminating the need for costly numerical simulations. Our results demonstrate that SDE Matching achieves performance comparable to adjoint sensitivity methods while drastically reducing computational complexity.