AI Summary
This paper studies sparsification of random constraint satisfaction problems (CSPs): given a CSP instance, find a reweighted version with as few constraints as possible that preserves, within a multiplicative factor of $(1 \pm \epsilon)$, the fraction of satisfied constraints under every assignment. Focusing on the $r$-partite and uniform random models, the analysis integrates tools from random graph theory, i.i.d. edge sampling, and predicate structure analysis, supported by probabilistic and combinatorial arguments. The main contributions are threefold: (1) identifying a sharp sparsification threshold in the $r$-partite model and asymptotically approaching the optimal sparsity bound; (2) revealing non-monotone sparsifiability in the uniform model, and characterizing intricate phase transitions in the constraint density range $n^k$ to $n^{k+1}$; (3) proposing a computationally tractable criterion based on algebraic properties of predicates, enabling the first exact classification of sparsifiability across diverse predicates.
Abstract
The problem of CSP sparsification asks: for a given CSP instance, what is the sparsest possible reweighting such that for every possible assignment to the instance, the number of satisfied constraints is preserved up to a factor of $1 \pm \epsilon$? We initiate the study of the sparsification of random CSPs. In particular, we consider two natural random models: the $r$-partite model and the uniform model. In the $r$-partite model, CSPs are formed by partitioning the variables into $r$ parts, with constraints selected by randomly picking one vertex out of each part. In the uniform model, $r$ distinct vertices are chosen at random from the pool of variables to form each constraint.
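The two random models above can be made concrete with a short sketch. The following is a minimal illustration, not taken from the paper; the function names and the encoding of variables are our own. In the $r$-partite sketch, a variable is a pair (part index, index within part); in the uniform sketch, variables are simply integers drawn from one shared pool.

```python
import random

def r_partite_instance(n, m, r):
    """Sample m constraints in the r-partite model: the variables are split
    into r parts of size n, and each constraint takes one variable from
    each part. A variable is encoded as (part_index, index_within_part)."""
    return [tuple((p, random.randrange(n)) for p in range(r))
            for _ in range(m)]

def uniform_instance(n, m, r):
    """Sample m constraints in the uniform model: each constraint is a
    tuple of r distinct variables drawn from a single pool of n variables."""
    return [tuple(random.sample(range(n), r)) for _ in range(m)]
```

Note that in the $r$-partite model the same variable can never appear twice in a constraint (each part contributes exactly one), whereas the uniform model enforces distinctness explicitly via sampling without replacement.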
In the $r$-partite model, we exhibit a sharp threshold phenomenon. For every predicate $P$, there is an integer $k$ such that a random instance on $n$ vertices and $m$ edges cannot (essentially) be sparsified if $m \le n^k$ and can be sparsified to size $\approx n^k$ if $m \ge n^k$. Here, $k$ corresponds to the largest copy of the AND which can be found within $P$. Furthermore, these sparsifiers are simple, as they can be constructed by i.i.d. sampling of the edges.
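The i.i.d. sampling construction mentioned above can be sketched in a few lines; this is a generic subsampling-with-reweighting template, not the paper's tuned parameter choice. Each constraint is kept independently with some probability $p$ and, if kept, reweighted by $1/p$, so that for every assignment the expected weighted number of satisfied constraints equals the true count.

```python
import random

def iid_sparsify(constraints, keep_prob):
    """Keep each constraint independently with probability keep_prob and
    reweight survivors by 1/keep_prob. For any fixed assignment, the
    weighted count of satisfied surviving constraints is an unbiased
    estimator of the original count of satisfied constraints."""
    sparsifier = []
    for c in constraints:
        if random.random() < keep_prob:
            sparsifier.append((c, 1.0 / keep_prob))
    return sparsifier
```

Unbiasedness alone is per-assignment; the content of the theorem is a concentration statement strong enough to union-bound over all assignments, which is where the $m \ge n^k$ density requirement enters.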
In the uniform model, the situation is a bit more complex. For every predicate $P$, there is an integer $k$ such that a random instance on $n$ vertices and $m$ edges cannot (essentially) be sparsified if $m \le n^k$ and can be sparsified to size $\approx n^k$ if $m \ge n^{k+1}$. However, for some predicates $P$, if $m \in [n^k, n^{k+1}]$, there may or may not be a nontrivial sparsifier. In fact, we show that there are predicates where the sparsifiability of random instances is non-monotone, i.e., as we add more random constraints, the instances become more sparsifiable. We give a precise (efficiently computable) procedure for determining which situation a specific predicate $P$ falls into.