🤖 AI Summary
This study addresses the feasibility of efficient sampling from the solution space of nonconvex perceptron models. We propose a diffusion-based sampling framework that integrates Algorithmic Stochastic Localization (ASL) with Approximate Message Passing (AMP), and use the replica method to characterize the process analytically in the thermodynamic limit in terms of a few order parameters, yielding an explicit criterion for when sampling is feasible. Theoretically, we show that for the spherical perceptron with negative stability, efficient approximate uniform sampling succeeds throughout the entire replica-symmetric phase; in contrast, for the binary perceptron, sampling via this diffusion process provably fails because the typical set of solutions exhibits the overlap gap property. Furthermore, we take first steps toward constructing tractable surrogate measures, providing a new paradigm and theoretical benchmark for sampling from nonconvex high-dimensional distributions.
📝 Abstract
We analyze the problem of sampling from the solution space of simple yet non-convex neural network models by employing a denoising diffusion process known as Algorithmic Stochastic Localization, where the score function is provided by Approximate Message Passing. We introduce a formalism based on the replica method to characterize the process in the infinite-size limit in terms of a few order parameters, and, in particular, we provide criteria for the feasibility of sampling. We show that, in the case of the spherical perceptron problem with negative stability, approximate uniform sampling is achievable across the entire replica symmetric region of the phase diagram. In contrast, for the binary perceptron, uniform sampling via diffusion invariably fails due to the overlap gap property exhibited by the typical set of solutions. We discuss the first steps in defining alternative measures that can be efficiently sampled.
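To make the mechanism concrete, here is a minimal toy sketch of the stochastic localization diffusion described above. In the paper the drift (the posterior mean of the target measure given the observation process) is supplied by AMP; in this illustration we instead take a one-dimensional standard Gaussian target, for which the posterior mean of the tilted measure is available in closed form, so the AMP denoiser is replaced by an exact formula. All function names and parameter values below are illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_mean(y, t):
    # Stochastic localization observes y_t = t*x + B_t with x drawn from
    # the target measure. For a standard Gaussian target N(0, 1), the
    # exact denoiser is E[x | y_t] = y_t / (t + 1); in the paper's setting
    # this conditional mean would be estimated by AMP instead.
    return y / (t + 1.0)

def stochastic_localization_sample(n, T=20.0, dt=0.01):
    # Euler-Maruyama simulation of the localization SDE
    #   dy_t = m(y_t, t) dt + dW_t,
    # whose drift m is the posterior mean above. As t grows, the tilted
    # measure concentrates, and y_T / T approaches a sample from the target.
    y = np.zeros(n)
    t = 0.0
    for _ in range(int(T / dt)):
        y += posterior_mean(y, t) * dt + np.sqrt(dt) * rng.standard_normal(n)
        t += dt
    return y / T

samples = stochastic_localization_sample(5000)
print(samples.mean(), samples.var())  # should be close to 0 and 1
```

The empirical mean and variance of the output approach those of the N(0, 1) target as T grows (the residual noise in y_T / T has variance 1/T). The hard part in the nonconvex perceptron setting is precisely that no closed-form denoiser exists, which is why the feasibility of sampling reduces to whether AMP can track the posterior mean along the trajectory.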