🤖 AI Summary
This paper investigates the efficient inference of the parameter η for a k-SAT formula Φ from a single satisfying assignment σ, where σ is drawn with probability ∝ e^{η m(σ)} and m(σ) denotes the number of variables set to true. Focusing on formulas in which every variable appears at most d times, the paper employs probabilistic constructions, information-theoretic lower bounds, the Markov random field (MRF) single-sample learning framework, and a simplified analysis of the learning algorithm. Key contributions: (i) the first proof that single-sample learning becomes information-theoretically impossible strictly below the satisfiability threshold of bounded-degree k-SAT formulas, refuting the possibility that the two thresholds coincide; (ii) a sharp interplay between η and d: learning is feasible when d ≲ 2^{k/2} (nearly optimal as η → 0), but impossible when d = Θ(k²) and η is sufficiently large, with the impossibility extending to small η when d scales exponentially in k.
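As a concrete illustration of the model (not taken from the paper), the sketch below enumerates the satisfying assignments of a tiny made-up 3-SAT formula and forms the Boltzmann distribution ∝ e^{η m(σ)}, then draws the kind of single sample the learner would observe. The formula `phi`, the choice η = 1, and all names are illustrative assumptions.

```python
import itertools
import math
import random

# Made-up 3-SAT formula on 4 variables: each clause is a tuple of signed
# literals, where +i means x_i and -i means NOT x_i (variables indexed from 1).
phi = [(1, 2, 3), (-1, 2, 4), (1, -3, -4)]
n = 4

def satisfies(sigma, clauses):
    """sigma is a tuple of booleans; sigma[i-1] is the value of x_i."""
    return all(any(sigma[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

def boltzmann(clauses, n, eta):
    """Distribution over satisfying assignments, P(sigma) ∝ e^{eta * m(sigma)},
    where m(sigma) = number of variables set to true."""
    sats = [s for s in itertools.product([False, True], repeat=n)
            if satisfies(s, clauses)]
    weights = [math.exp(eta * sum(s)) for s in sats]
    Z = sum(weights)  # partition function
    return {s: w / Z for s, w in zip(sats, weights)}

dist = boltzmann(phi, n, eta=1.0)
# A single "one-shot" sample sigma, as the learner would receive it:
rng = random.Random(0)
sigma = rng.choices(list(dist), weights=list(dist.values()))[0]
```

For η > 0 the distribution tilts toward assignments with many true variables; as η → 0 it converges to the uniform distribution over satisfying assignments studied in the sampling literature.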
📝 Abstract
Consider a $k$-SAT formula $\Phi$ where every variable appears at most $d$ times, and let $\sigma$ be a satisfying assignment of $\Phi$ sampled proportionally to $e^{\eta m(\sigma)}$ where $m(\sigma)$ is the number of variables set to true and $\eta$ is a real parameter. Given $\Phi$ and $\sigma$, can we learn the value of $\eta$ efficiently? This problem falls into a recent line of works about single-sample ("one-shot") learning of Markov random fields. The $k$-SAT setting we consider here was recently studied by Galanis, Kandiros, and Kalavasis (SODA'24) where they showed that single-sample learning is possible when roughly $d \leq 2^{k/6.45}$ and impossible when $d \geq (k+1) 2^{k-1}$. Crucially, for their impossibility results they used the existence of unsatisfiable instances which, aside from the gap in $d$, left open the question of whether the feasibility threshold for one-shot learning is dictated by the satisfiability threshold of $k$-SAT formulas of bounded degree. Our main contribution is to answer this question negatively. We show that one-shot learning for $k$-SAT is infeasible well below the satisfiability threshold; in fact, we obtain impossibility results for degrees $d$ as low as $k^2$ when $\eta$ is sufficiently large, and bootstrap this to small values of $\eta$ when $d$ scales exponentially with $k$, via a probabilistic construction. On the positive side, we simplify the analysis of the learning algorithm and obtain significantly stronger bounds on $d$ in terms of $\eta$. In particular, for the uniform case $\eta \rightarrow 0$ that has been studied extensively in the sampling literature, our analysis shows that learning is possible under the condition $d \lesssim 2^{k/2}$. This is nearly optimal (up to constant factors) in the sense that it is known that sampling a uniformly-distributed satisfying assignment is NP-hard for $d \gtrsim 2^{k/2}$.
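To make the learning question itself concrete, here is a brute-force sketch (an illustration of the problem statement only, not the paper's algorithm): given a small made-up formula and a single observed satisfying assignment $\sigma$, estimate $\eta$ by maximizing the single-sample log-likelihood $\eta\, m(\sigma) - \log Z(\eta)$ over a grid. The formula, the observed $\sigma$, and the grid range are all illustrative assumptions.

```python
import itertools
import math

# Made-up 3-SAT formula on 4 variables: +i means x_i, -i means NOT x_i.
phi = [(1, 2, 3), (-1, 2, 4), (1, -3, -4)]
n = 4

def satisfies(sigma, clauses):
    return all(any(sigma[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

# All satisfying assignments (feasible only for tiny toy instances).
SAT = [s for s in itertools.product([False, True], repeat=n)
       if satisfies(s, phi)]

def log_likelihood(eta, sigma):
    """Single-sample log-likelihood: eta * m(sigma) - log Z(eta)."""
    logZ = math.log(sum(math.exp(eta * sum(s)) for s in SAT))
    return eta * sum(sigma) - logZ

sigma = (True, True, False, True)           # the single observed sample, m = 3
grid = [i / 100 for i in range(-300, 301)]  # candidate eta values in [-3, 3]
eta_hat = max(grid, key=lambda e: log_likelihood(e, sigma))
```

The estimator picks the $\eta$ whose expected number of true variables best matches the one observation; the paper's results concern when such single-sample estimation is statistically and computationally feasible for bounded-degree formulas, where brute-force enumeration is of course unavailable.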