🤖 AI Summary
Existing numerical solvers for stochastic differential equations (SDEs) suffer from computationally expensive drift/diffusion evaluations, reliance on Gaussian sampling, sensitivity to quantization errors, and instability under non-Lipschitz drift terms. To address these challenges, this paper proposes a lattice-based random walk discretization method. It replaces continuous drift and diffusion updates with 1–2-bit stochastic operations, sampling binary or ternary increments at each step, thereby eliminating floating-point arithmetic and Gaussian sampling entirely and enabling native compatibility with bitstream probabilistic computing architectures. Theoretically, the method achieves first-order weak convergence, is robust to quantization errors, and remains stable under non-Lipschitz drifts. Empirical evaluation on canonical SDEs and state-of-the-art diffusion models demonstrates its effectiveness while delivering substantial hardware efficiency gains and computational speedup.
📝 Abstract
We introduce a lattice random walk discretisation scheme for stochastic differential equations (SDEs) that samples binary or ternary increments at each step, reducing complex drift and diffusion computations to simple 1- or 2-bit random values. This approach is a significant departure from traditional floating-point discretisations and offers several advantages, including compatibility with stochastic computing architectures that replace floating-point arithmetic with direct manipulation of the underlying probability distribution of a bitstream, elimination of Gaussian sampling requirements, robustness to quantisation errors, and handling of non-Lipschitz drifts. We prove weak convergence and demonstrate the advantages through experiments on various SDEs, including state-of-the-art diffusion models.