🤖 AI Summary
Score-based generative models lack an explicit energy function, hindering their integration into the Metropolis-Hastings (MH) sampling framework. To address this, we propose a differentiable MH sampling mechanism. Our core method establishes, for the first time, a differentiable mapping between the score function and the MH acceptance probability. Leveraging detailed balance, we design a novel loss function that explicitly enforces equilibrium conditions, enabling the learned score to directly yield a differentiable, valid acceptance probability. This approach transcends conventional SDE-based sampling paradigms: it guarantees MH convergence without requiring explicit energy modeling and naturally accommodates non-Gaussian distributions—including heavy-tailed ones. Experiments demonstrate substantial improvements in sample quality and convergence stability across MH variants, including MALA, validating both theoretical soundness and practical efficacy.
📝 Abstract
In this paper, we introduce a new approach for integrating score-based models with the Metropolis-Hastings algorithm. While traditional score-based diffusion models excel at accurately learning the score function from data points, they lack an energy function, making the Metropolis-Hastings adjustment step inaccessible. Consequently, the unadjusted Langevin algorithm is often used for sampling with estimated score functions. The missing energy function also rules out the Metropolis-adjusted Langevin algorithm and other Metropolis-Hastings methods, cutting off the wealth of algorithms built on acceptance functions. We address this limitation by introducing a new loss function based on the *detailed balance condition*, allowing the estimation of Metropolis-Hastings acceptance probabilities given a learned score function. We demonstrate the effectiveness of the proposed method in various scenarios, including sampling from heavy-tailed distributions.
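To make the inaccessibility of the adjustment step concrete, the sketch below shows standard MALA on a 1-D Gaussian where the energy is known in closed form. The acceptance probability requires the energy difference `U(x) - U(x')`, which a score-only model cannot supply; the paper's loss replaces exactly this quantity. This is an illustrative toy, not the paper's method, and all names and step sizes here are assumptions.

```python
import numpy as np

# Toy target: standard 1-D Gaussian, where both the score s(x) = -x
# and the energy U(x) = x^2 / 2 are available in closed form. A
# score-based model would give us only s(x), not U(x).

def score(x):
    return -x  # gradient of the log-density of N(0, 1)

def energy(x):
    return 0.5 * x**2  # negative log-density (up to a constant)

def mala(n_steps=20000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = []
    for _ in range(n_steps):
        # Langevin proposal: x' = x + step * s(x) + sqrt(2 * step) * noise
        noise = rng.standard_normal()
        x_prop = x + step * score(x) + np.sqrt(2 * step) * noise
        # Log proposal densities log q(x' | x) and log q(x | x'),
        # computable from the score alone.
        fwd = -((x_prop - x - step * score(x)) ** 2) / (4 * step)
        bwd = -((x - x_prop - step * score(x_prop)) ** 2) / (4 * step)
        # MH acceptance ratio: this energy difference is the term a
        # score-only model lacks, and which the proposed loss recovers.
        log_alpha = energy(x) - energy(x_prop) + bwd - fwd
        if np.log(rng.uniform()) < log_alpha:
            x = x_prop  # accept; otherwise keep the current state
        samples.append(x)
    return np.array(samples)

samples = mala()
```

Dropping the acceptance test (always taking `x = x_prop`) recovers the unadjusted Langevin algorithm mentioned in the abstract, which is biased at finite step sizes; the MH correction removes that bias.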