🤖 AI Summary
This work proposes a training-free generative modeling framework that circumvents the high computational cost and compositional rigidity of conventional large-scale neural network approaches. Leveraging kernel methods and stochastic differential equations (SDEs), the method solves a $P\times P$ linear system directly for the SDE drift term and uses Girsanov's theorem to choose the diffusion coefficient; although the optimal coefficient diverges at the initial time, a dedicated integrator handles this divergence stably. The framework flexibly incorporates feature maps such as scattering transforms or pretrained features without requiring explicit training. It demonstrates strong empirical performance across diverse domains—including financial time series, turbulent flow data, and image generation—producing high-quality samples while maintaining theoretical rigor and computational efficiency.
📝 Abstract
We develop a kernel method for generative modeling within the stochastic interpolant framework, replacing neural network training with linear systems. The drift of the generative SDE is $\hat b_t(x) = \nabla\varphi(x)^\top\eta_t$, where $\eta_t\in\mathbb{R}^P$ solves a $P\times P$ system computable from data, with $P$ independent of the data dimension $d$. Since the estimates are inexact, the diffusion coefficient $D_t$ affects sample quality; the optimal $D_t^*$ obtained from Girsanov's theorem diverges at $t=0$, but this poses no difficulty, and we develop an integrator that handles it seamlessly. The framework accommodates diverse feature maps -- scattering transforms, pretrained generative models, etc. -- enabling training-free generation and model combination. We demonstrate the approach on financial time series, turbulence, and image generation.
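To make the linear-system step concrete, here is a minimal NumPy sketch of one way such a $P\times P$ system could arise. It assumes (this formulation is an illustration, not taken from the paper) a least-squares fit of the drift ansatz $\nabla\varphi(x)^\top\eta_t$ to interpolant velocities at a fixed time $t$, whose normal equations are $A\eta_t = c$ with $A = \mathbb{E}[\nabla\varphi(x_t)\nabla\varphi(x_t)^\top]$ and $c = \mathbb{E}[\nabla\varphi(x_t)\,\dot x_t]$. The names `drift_coefficients`, `grad_phi`, and `xdot` are hypothetical.

```python
import numpy as np

def drift_coefficients(grad_phi, xdot, reg=1e-8):
    """Solve the P x P normal equations A eta = c for the drift coefficients.

    grad_phi: (n, P, d) array of feature-map Jacobians at n interpolant samples
    xdot:     (n, d)    array of interpolant velocities at the same samples
    reg:      small ridge term keeping the Gram matrix well conditioned
    Returns eta of shape (P,); note the system size P is independent of d.
    """
    n, P, d = grad_phi.shape
    # A = E[grad_phi grad_phi^T]  -- a P x P Gram matrix averaged over samples
    A = np.einsum('npd,nqd->pq', grad_phi, grad_phi) / n
    # c = E[grad_phi xdot]        -- a P-dimensional right-hand side
    c = np.einsum('npd,nd->p', grad_phi, xdot) / n
    return np.linalg.solve(A + reg * np.eye(P), c)

# Toy usage with linear features phi(x) = W x, so the Jacobian is W everywhere.
rng = np.random.default_rng(0)
n, d, P = 256, 4, 8
W = rng.normal(size=(P, d))
x = rng.normal(size=(n, d))
xdot = x @ rng.normal(size=(d, d)).T           # synthetic velocities
grad_phi = np.broadcast_to(W, (n, P, d))
eta = drift_coefficients(grad_phi, xdot)
b_hat = grad_phi[0].T @ eta                    # drift estimate at x[0], in R^d
print(eta.shape, b_hat.shape)
```

The key point the sketch illustrates is that only a $P\times P$ solve is needed regardless of $d$, so richer feature maps change $P$ but never couple the cost to the data dimension.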