🤖 AI Summary
This work investigates whether stable algorithms can reliably locate strongly isolated Boolean solutions (those at Hamming distance Ω(N) from all other solutions) in the binary perceptron under random half-space constraints. By introducing infinitesimal Gaussian perturbations of the disorder, the authors establish that any algorithm stable under such perturbations cannot find strongly isolated solutions with high probability, a result obtained without relying on the overlap gap property. The analysis combines Pitt's correlation inequality, used to show that the number of perturbed solutions near a pre-existing isolated solution does not concentrate, with the low-degree polynomial framework to bound the success probability of stable algorithms. This yields an upper bound of approximately 0.84233 on the success probability, implying that no stable algorithm can simultaneously succeed with high probability and reliably return strongly isolated solutions.
📝 Abstract
We study the binary perceptron, a random constraint satisfaction problem that asks to find a Boolean vector in the intersection of independently chosen random halfspaces. A striking feature of this model is that at every positive constraint density, it is expected that a $1-o_N(1)$ fraction of solutions are \emph{strongly isolated}, i.e. separated from all others by Hamming distance $\Omega(N)$. At the same time, efficient algorithms are known to find solutions at certain positive constraint densities. This raises a natural question: can any isolated solution be algorithmically visible?
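To make the setup concrete, the following is a minimal sketch of the binary perceptron feasibility check: a candidate vector $x \in \{-1,+1\}^N$ is a solution when it lies in every random halfspace $\langle g_a, x\rangle/\sqrt{N} \geq \kappa$. The threshold `kappa=0` and the dimensions below are illustrative choices, not taken from the paper.

```python
import numpy as np

def is_solution(G, x, kappa=0.0):
    """Check whether x in {-1,+1}^N lies in all M random halfspaces,
    i.e. <g_a, x> / sqrt(N) >= kappa for every row g_a of G.
    kappa=0 is an illustrative threshold; the model allows others."""
    N = x.shape[0]
    return bool(np.all(G @ x / np.sqrt(N) >= kappa))

rng = np.random.default_rng(0)
N, M = 200, 50                       # N variables, M = alpha*N constraints (toy sizes)
G = rng.standard_normal((M, N))      # i.i.d. Gaussian disorder
x = np.sign(rng.standard_normal(N))  # a random Boolean (+-1) candidate
print(is_solution(G, x))
```

A random candidate will almost never satisfy all constraints at once; the algorithmic question is how to construct one that does, and whether any such construction can land on an isolated solution.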
We answer this in the negative: no algorithm whose output is stable under a tiny Gaussian resampling of the disorder can \emph{reliably} locate isolated solutions. We show that any stable algorithm has success probability at most $\frac{3\sqrt{17}-9}{4}+o_N(1)\leq 0.84233$. Furthermore, every stable algorithm that finds a solution with probability $1-o_N(1)$ finds an isolated solution with probability $o_N(1)$. The class of stable algorithms we consider includes all degree-$D$ polynomial algorithms with $D\leq o(N/\log N)$; under the low-degree heuristic \cite{hopkins2018statistical}, this suggests that locating strongly isolated solutions requires running time $\exp(\widetilde{\Theta}(N))$.
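The notion of stability above can be sketched as follows: the disorder $G$ is interpolated with a fresh Gaussian copy, $\sqrt{1-\varepsilon}\,G + \sqrt{\varepsilon}\,\widetilde{G}$, which preserves the Gaussian law, and an algorithm is stable if its outputs on the original and perturbed disorder are close in normalized Hamming distance. The toy "algorithm" and the parameter values below are hypothetical, purely to illustrate the definition.

```python
import numpy as np

def resample(G, eps, rng):
    """Tiny Gaussian resampling of the disorder: for any eps in [0,1],
    sqrt(1-eps)*G + sqrt(eps)*G_fresh has the same Gaussian law as G."""
    return np.sqrt(1.0 - eps) * G + np.sqrt(eps) * rng.standard_normal(G.shape)

def instability(algo, G, eps, rng):
    """Normalized Hamming distance between the algorithm's outputs on the
    original and on the perturbed disorder (illustrative notion of stability;
    `algo` maps a disorder matrix to a vector in {-1,+1}^N)."""
    x, x_pert = algo(G), algo(resample(G, eps, rng))
    return float(np.mean(x != x_pert))

# Toy 'algorithm' (hypothetical): sign of the column sums of the disorder.
algo = lambda G: np.sign(G.sum(axis=0) + 1e-12)

rng = np.random.default_rng(1)
G = rng.standard_normal((120, 400))
print(instability(algo, G, eps=1e-4, rng=rng))  # small for tiny eps
```

The theorem concerns algorithms for which this distance stays $o_N(1)$ as $\varepsilon \to 0$; in particular, low-degree polynomial algorithms satisfy such a bound.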
Our proof does not use the overlap gap property. Instead, we show via Pitt's correlation inequality that after a random perturbation of the disorder, the number of solutions close to a pre-existing isolated solution cannot concentrate at $1$.