AI Summary
This work investigates whether the low-degree heuristic can rigorously rule out the distinguishing power of specific efficient algorithms in average-case settings. By establishing, for the first time, a formal theoretical link between the low-degree likelihood ratio (LDLR) and algorithmic performance, the study converts LDLR upper bounds into rigorous lower bounds against concrete classes of algorithms, such as those based on symmetric polynomials and subgraph statistics. Leveraging tools from low-degree moment analysis, the theory of symmetric polynomials, and Gaussian and uniform noise models, the authors show that, under various distributional assumptions, if a problem exhibits low-degree indistinguishability, then its noisy variant cannot be effectively distinguished by these algorithms. This provides the first formal justification for the validity of the low-degree heuristic.
Abstract
Over the past decade, the low-degree heuristic has been used to estimate algorithmic thresholds for a wide range of average-case planted-vs-null distinguishing problems. Such results rely on the hypothesis that if the low-degree moments of the planted and null distributions are sufficiently close, then no efficient (noise-tolerant) algorithm can distinguish between them. This hypothesis is appealing due to the simplicity of calculating the low-degree likelihood ratio (LDLR) -- a quantity that measures the similarity between low-degree moments. However, despite sustained interest in the area, it remains unclear whether low-degree indistinguishability actually rules out any interesting class of algorithms. In this work, we initiate the study of translating LDLR upper bounds into rigorous lower bounds against concrete algorithms, and we develop technical tools for doing so. As a consequence, we prove the following for any permutation-invariant distribution $\mathsf{P}$:

1. If $\mathsf{P}$ is over $\{0,1\}^n$ and is low-degree indistinguishable from $U = \mathrm{Unif}(\{0,1\}^n)$, then a noisy version of $\mathsf{P}$ is statistically indistinguishable from $U$.

2. If $\mathsf{P}$ is over $\mathbb{R}^n$ and is low-degree indistinguishable from the standard Gaussian ${N}(0, 1)^n$, then no statistic based on symmetric polynomials of degree at most $O(\log n/\log \log n)$ can distinguish a noisy version of $\mathsf{P}$ from ${N}(0, 1)^n$.

3. If $\mathsf{P}$ is over $\mathbb{R}^{n\times n}$ and is low-degree indistinguishable from ${N}(0,1)^{n\times n}$, then no constant-sized subgraph statistic can distinguish a noisy version of $\mathsf{P}$ and ${N}(0, 1)^{n\times n}$.
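To make the LDLR concrete, here is a minimal sketch (not from the paper; the biased-coin planted model and all function names are illustrative assumptions). For the toy planted distribution $\mathsf{P} = \mathrm{Bernoulli}(1/2+\varepsilon)^n$ versus $U = \mathrm{Unif}(\{0,1\}^n)$, each Fourier coefficient of the likelihood ratio is $\mathbb{E}_{\mathsf{P}}[\chi_S] = (-2\varepsilon)^{|S|}$, so the squared norm of the degree-$\le D$ likelihood ratio has a closed form; the code computes it exactly and checks one Fourier coefficient by Monte Carlo.

```python
import math
import random

def ldlr_norm_sq(n, D, eps):
    """Exact squared norm of the degree-<=D likelihood ratio for the toy
    planted model P = Bernoulli(1/2 + eps)^n vs U = Unif({0,1}^n).
    Since E_P[chi_S] = (-2*eps)^{|S|} for parity characters chi_S,
    ||L^{<=D}||^2 = sum_{d=0}^{D} C(n, d) * (2*eps)^{2d}.
    A value close to 1 indicates low-degree indistinguishability."""
    return sum(math.comb(n, d) * (2 * eps) ** (2 * d) for d in range(D + 1))

def mc_fourier_coeff(n, S, eps, trials=200_000, seed=0):
    """Monte Carlo estimate of E_P[chi_S(x)] under the biased-bit model,
    where chi_S(x) = prod_{i in S} (-1)^{x_i}. Should approach (-2*eps)^{|S|}."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = [rng.random() < 0.5 + eps for _ in range(n)]
        total += math.prod(-1 if x[i] else 1 for i in S)
    return total / trials

if __name__ == "__main__":
    # For n = 4, D = 2, eps = 0.1:
    # 1 + C(4,1)*(0.2)^2 + C(4,2)*(0.2)^4 = 1 + 0.16 + 0.0096 = 1.1696
    print(ldlr_norm_sq(4, 2, 0.1))
    # Single-coordinate Fourier coefficient, should be near -2*eps = -0.2
    print(mc_fourier_coeff(4, [0], 0.1))
```

In this toy model the norm stays near 1 only when $\varepsilon$ is small relative to $n$; the paper's results concern the converse direction, namely what such a bound rules out for concrete algorithm classes.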