🤖 AI Summary
This paper investigates the feasibility of recovering a planted clique in an Erdős–Rényi random graph using gradient descent. Addressing the gap between information-theoretic detectability and the limited success of existing black-box optimization methods, which require strong assumptions, we propose a continuous optimization framework based on Lagrangian relaxation: we formulate clique recovery as a quadratic program with graph-structural constraints, introduce Lagrange multipliers for a smooth relaxation, and design an associated Hamiltonian. Theoretically, we prove that both standard gradient descent and low-temperature Markov chain dynamics recover the planted clique in polynomial time when the clique size satisfies $k = \Omega(\sqrt{n})$. Crucially, we establish the pivotal role of initialization: convergence is guaranteed from the all-ones (full-graph) initialization, whereas the all-zeros (empty-set) initialization fails. Furthermore, our algorithm remains robust under edge corruption, significantly extending the known capabilities of gradient-based methods for this fundamental problem.
📝 Abstract
The planted clique problem is a paradigmatic model of statistical-to-computational gaps: the planted clique is information-theoretically detectable if its size satisfies $k \ge 2\log_2 n$, but polynomial-time algorithms are known for the recovery task only when $k = \Omega(\sqrt{n})$. By now, there are many algorithms that succeed as soon as $k = \Omega(\sqrt{n})$. Glaringly, however, no black-box optimization method, e.g., gradient descent or the Metropolis process, has been shown to work. In fact, Chen, Mossel, and Zadik recently showed that any Metropolis process whose state space is the set of cliques fails to find any sub-linear-sized planted clique in polynomial time if initialized naturally from the empty set. We show that using the method of Lagrange multipliers, namely optimizing the Hamiltonian given by the sum of the objective function and the clique constraint over the space of all subgraphs, succeeds. In particular, we prove that Markov chains which minimize this Hamiltonian (gradient descent and a low-temperature relaxation of it) succeed at recovering planted cliques of size $k = \Omega(\sqrt{n})$ if initialized from the full graph. Importantly, when initialized from the empty set, the relaxation still does not help gradient descent find sub-linear planted cliques. We also demonstrate robustness of these Markov chain approaches under a natural contamination model.
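To make the setup concrete, here is a minimal toy sketch of the kind of relaxed dynamics the abstract describes: a Hamiltonian of the form $H(S) = -|S| + \lambda \cdot (\#\text{non-edges inside } S)$ minimized over all vertex subsets by single-vertex greedy flips ("gradient descent"), initialized from the full graph. The penalty form, the multiplier value `lam = 2.0`, and the helper names are illustrative assumptions, not the paper's exact Hamiltonian or parameter schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def planted_clique_graph(n, k, p=0.5):
    """Adjacency matrix of G(n, p) with a clique planted on vertices 0..k-1."""
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1)
    A = A + A.T                               # symmetric, zero diagonal
    A[:k, :k] = 1 - np.eye(k, dtype=int)      # plant the clique
    return A

def flip_delta(A, x, lam, v):
    """Change in H(x) = -|S| + lam * (#non-edges inside S) if vertex v is flipped."""
    s, d = x.sum(), A[v] @ x                  # |S| and v's neighbours inside S
    if x[v] == 1:                             # removing v: |S| drops by 1,
        return 1 - lam * ((s - 1) - d)        # penalty drops by lam * missing pairs at v
    return -1 + lam * (s - d)                 # adding v: the mirror-image change

def greedy_descent(A, lam, x0):
    """Flip single vertices whenever that lowers H, until a local minimum."""
    x = x0.copy()
    improved = True
    while improved:
        improved = False
        for v in range(len(x)):
            if flip_delta(A, x, lam, v) < 0:
                x[v] ^= 1
                improved = True
    return x

n, k, lam = 150, 25, 2.0                      # k is roughly 2 * sqrt(n) (toy scale)
A = planted_clique_graph(n, k)
x = greedy_descent(A, lam, np.ones(n, dtype=int))   # full-graph initialization
S = np.flatnonzero(x)
```

Note the design choice `lam > 1`: removing a vertex with even one missing internal edge then strictly lowers `H`, so any local minimum induces a clique. Exact recovery of the planted clique is what the paper's analysis establishes in the $k = \Omega(\sqrt{n})$ regime; this toy run only guarantees that the output is some clique.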