Low degree conjecture implies sharp computational thresholds in stochastic block model

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the computational limits of weak community recovery in the symmetric stochastic block model (SBM) under polynomial-time algorithms. Leveraging the low-degree conjecture, we rigorously establish the Kesten–Stigum (KS) threshold as the sharp computational phase transition: below it, no polynomial-time algorithm achieves nontrivial correlation with the true communities; above it, existing algorithms attain constant correlation. We provide the first rigorous evidence of a sharp phase transition in recovery performance precisely at the KS threshold and extend low-degree moment lower bounds to the regime where the number of blocks diverges, confirming the persistence of the computational-statistical gap. Methodologically, our analysis integrates low-degree moment calculations, graph splitting and cross-validation, and correlation-preserving projections, thereby systematically uncovering the intrinsic computational hardness in parameter learning for SBMs.

📝 Abstract
We investigate implications of the (extended) low-degree conjecture (recently formalized in [MW23]) in the context of the symmetric stochastic block model. Assuming the conjecture holds, we establish that no polynomial-time algorithm can weakly recover community labels below the Kesten-Stigum (KS) threshold. In particular, we rule out polynomial-time estimators that, with constant probability, achieve correlation with the true communities that is significantly better than random. Above the KS threshold, by contrast, polynomial-time algorithms are known to achieve constant correlation with the true communities with high probability [Mas14, AS15]. To our knowledge, we provide the first rigorous evidence for a sharp transition in recovery rate for polynomial-time algorithms at the KS threshold. Notably, under a stronger version of the low-degree conjecture, our lower bound remains valid even when the number of blocks diverges. Furthermore, our results provide evidence of a computational-to-statistical gap in learning the parameters of stochastic block models. In contrast to prior work, which either (i) rules out polynomial-time algorithms for hypothesis testing with 1-o(1) success probability [Hopkins18, BBK+21a] under the low-degree conjecture, or (ii) rules out low-degree polynomials for learning the edge connection probability matrix [LG23], our approach provides stronger lower bounds for the recovery and learning problems. Our proof combines low-degree lower bounds from [Hopkins18, BBK+21a] with graph splitting and cross-validation techniques. To rule out general recovery algorithms, we employ the correlation-preserving projection method developed in [HS17].
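As a concrete illustration of the threshold the abstract refers to, here is a minimal sketch of the KS condition for the symmetric SBM. The parameterization (n nodes, k equal-sized communities, within-community edge probability a/n, across-community probability b/n) and the function name are ours for illustration, not taken from the paper:

```python
# Hedged sketch, assuming the standard a/n, b/n parameterization of the
# symmetric k-block SBM. In this convention the Kesten-Stigum condition
# reads (a - b)^2 > k * (a + (k - 1) * b); the paper's result says weak
# recovery is computationally hard for polynomial-time algorithms below it.

def above_ks_threshold(a: float, b: float, k: int) -> bool:
    """Return True if (a, b, k) lies above the Kesten-Stigum threshold."""
    mean_degree = (a + (k - 1) * b) / k    # average expected degree
    snr = (a - b) / (a + (k - 1) * b)      # second-eigenvalue ratio of the mean matrix
    # snr**2 * mean_degree > 1  is algebraically (a - b)^2 > k * (a + (k - 1) * b)
    return snr ** 2 * mean_degree > 1

# k = 2 examples: the condition reduces to (a - b)^2 > 2 * (a + b)
print(above_ks_threshold(a=10, b=2, k=2))  # 64 > 24  -> True  (recovery feasible)
print(above_ks_threshold(a=6, b=4, k=2))   # 4 <= 20  -> False (conjecturally hard)
```

Below this boundary the paper (conditionally on the low-degree conjecture) rules out any nontrivial polynomial-time correlation with the true labels; above it, algorithms such as those in [Mas14, AS15] succeed.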
Problem

Research questions and friction points this paper is trying to address.

Establishes a sharp computational threshold for weak recovery in stochastic block models.
Shows, assuming the low-degree conjecture, that no polynomial-time algorithm achieves nontrivial correlation below the Kesten-Stigum threshold.
Provides evidence of a computational-to-statistical gap in learning SBM parameters.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-degree conjecture implications
Graph splitting techniques
Correlation preserving projection method