🤖 AI Summary
To address the high computational cost and training instability of path discriminators in Neural Stochastic Differential Equation (SDE)-based generative modeling, this paper proposes a lightweight continuous-time discriminator built upon the Hermite function basis, establishing the first Hermite-guided adversarial training framework tailored to SDE-induced path distributions. Theoretically, the paper provides the first universal approximation and convergence guarantees for this discriminator with respect to SDE path measures. Methodologically, it explicitly captures temporal dependencies in sample paths via Hermite series expansions, drastically reducing both parameter count and computational overhead. Experiments on synthetic and real-world datasets demonstrate substantial improvements in sample quality, enhanced training stability, and over 40% faster inference, outperforming state-of-the-art SDE-based generative methods in overall performance.
📝 Abstract
Neural Stochastic Differential Equations (Neural SDEs) provide a principled framework for modeling continuous-time stochastic processes and have been widely adopted in fields ranging from physics to finance. Recent advances suggest that Generative Adversarial Networks (GANs) offer a promising approach to learning the complex path distributions induced by SDEs. However, a critical bottleneck lies in designing a discriminator that faithfully captures temporal dependencies while remaining computationally efficient. Prior works have explored Neural Controlled Differential Equations (CDEs) as discriminators due to their ability to model continuous-time dynamics, but such architectures suffer from high computational costs and exacerbate the instability of adversarial training. To address these limitations, we introduce HGAN-SDEs, a novel GAN-based framework that leverages Neural Hermite functions to construct a structured and efficient discriminator. Hermite functions provide an expressive yet lightweight basis for approximating path-level dynamics, enabling both reduced runtime complexity and improved training stability. We establish the universal approximation property of our framework for a broad class of SDE-driven distributions and theoretically characterize its convergence behavior. Extensive empirical evaluations on synthetic and real-world systems demonstrate that HGAN-SDEs achieve superior sample quality and learning efficiency compared to existing generative models for SDEs.
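The abstract does not specify the discriminator's exact construction, but the core idea of representing a sample path through a Hermite series can be illustrated with a minimal sketch: project a discretized path onto the first few orthonormal Hermite functions (computed via the standard three-term recurrence) to obtain a fixed-size feature vector that a lightweight discriminator head could then score. All function names here are hypothetical, and this is only an assumed illustration of a Hermite-basis path featurizer, not the paper's actual architecture.

```python
import numpy as np

def hermite_functions(t, n_max):
    """Evaluate the orthonormal Hermite functions psi_0 .. psi_{n_max-1}
    on the grid t, using the numerically stable three-term recurrence:
      psi_{n+1}(t) = sqrt(2/(n+1)) * t * psi_n(t) - sqrt(n/(n+1)) * psi_{n-1}(t).
    """
    t = np.asarray(t, dtype=float)
    psi = np.empty((n_max, t.size))
    psi[0] = np.pi ** -0.25 * np.exp(-0.5 * t ** 2)
    if n_max > 1:
        psi[1] = np.sqrt(2.0) * t * psi[0]
    for n in range(1, n_max - 1):
        psi[n + 1] = (np.sqrt(2.0 / (n + 1)) * t * psi[n]
                      - np.sqrt(n / (n + 1)) * psi[n - 1])
    return psi

def path_features(ts, xs, n_max):
    """Project a path x(t), sampled on the uniform grid ts, onto the first
    n_max Hermite functions with a trapezoid-free Riemann quadrature,
    yielding a fixed-size feature vector c_n ~ integral x(t) psi_n(t) dt."""
    dt = ts[1] - ts[0]
    psi = hermite_functions(ts, n_max)
    return psi @ xs * dt
```

In a GAN setting such features would replace the expensive Neural CDE pass: both real and generated paths are mapped to coefficient vectors of fixed length, and only a small network on top of those coefficients needs adversarial training.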