🤖 AI Summary
This work addresses the challenge of implementing online training for score-based generative models (SGMs) on physical hardware. Methodologically, it introduces a local learning framework grounded in nonequilibrium driving protocols: a network of driven, nonlinear, overdamped oscillators is coupled to a thermal bath to enable physically realizable sampling, and a local learning rule estimates gradients solely from measurements of local forces or dynamical responses, thereby circumventing global backpropagation. The key contribution is embedding SGM training within a nonequilibrium statistical physics framework, enabling hardware-efficient, online optimization. Experiments demonstrate efficacy in sampling from a 2D Gaussian mixture and in generating MNIST "0"/"1" digits with a 10×10 oscillator network. The approach establishes a paradigm for physics-inspired generative modeling, bridging statistical learning with analog physical implementation.
📝 Abstract
We show that the out-of-equilibrium driving protocol of score-based generative models (SGMs) can be learned via a local learning rule. The gradients with respect to the parameters of the driving protocol are computed directly from force measurements or from observed system dynamics. As a demonstration, we implement an SGM in a network of driven, nonlinear, overdamped oscillators coupled to a thermal bath. We first apply it to the problem of sampling from a mixture of two Gaussians in 2D. Finally, we train a network of 10×10 oscillators to sample images of 0s and 1s from the MNIST dataset.
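To make the sampling setup concrete, here is a minimal sketch of overdamped Langevin dynamics drawing samples from a 2D two-Gaussian mixture. This is an illustration only: it uses the analytic score of the target mixture in place of the learned driving protocol, and all names, parameter values, and the Euler-Maruyama integrator are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: equal-weight mixture of two isotropic Gaussians.
means = np.array([[-2.0, 0.0], [2.0, 0.0]])  # component means (assumed)
sigma2 = 0.25                                # shared variance (assumed)

def score(x):
    """Analytic score grad_x log p(x) of the Gaussian mixture."""
    d = x[:, None, :] - means[None, :, :]            # (n, K, 2) offsets
    logw = -np.sum(d**2, axis=-1) / (2.0 * sigma2)   # (n, K) log kernels
    r = np.exp(logw - logw.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)                # responsibilities
    return -(r[:, :, None] * d).sum(axis=1) / sigma2 # (n, 2)

# Euler-Maruyama integration of dx = score(x) dt + sqrt(2) dW,
# mimicking overdamped oscillators coupled to a thermal bath.
n, dt, steps = 500, 1e-3, 5000
x = rng.standard_normal((n, 2))
for _ in range(steps):
    x += dt * score(x) + np.sqrt(2.0 * dt) * rng.standard_normal((n, 2))
```

After relaxation the walkers concentrate around the two modes at x = ±2; in the paper this drift term would instead be supplied by the trained driving forces of the oscillator network.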