Learning-Augmented Algorithms for Boolean Satisfiability

📅 2025-05-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how machine-learned predictions ("advice") can improve worst-case algorithmic performance for SAT decision and optimization problems. We propose two structured advice models: (i) subset advice, which reveals the correct assignments of a uniformly random ε-fraction of the variables in an optimal solution; and (ii) label advice, which provides noisy predictions of the optimal assignment for all variables. We introduce a learning-augmented framework for SAT, combining the PPSZ family of algorithms, randomized analysis, and graph-density-aware approximation techniques into robust algorithms. Theoretically, we achieve: (i) faster k-SAT decision, with the base of the exponential running time reduced by a multiplicative factor of 2⁻ᶜ, where c depends on ε and k; (ii) a MAX-2-SAT approximation ratio of 0.94 + Ω(ε); and (iii) near-optimal solutions on instances with large average degree. Our core contribution is a rigorous theoretical bridge between SAT and learning-augmented algorithms, yielding improved worst-case guarantees for both decision and optimization variants.
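
As a minimal sketch of the black-box use of subset advice described above (fix the revealed variables, simplify the formula, then run any off-the-shelf approximation on the residual and merge), consider the Python below. The CNF encoding, the helper names (`simplify`, `approx_with_subset_advice`), and the stand-in `random_approx` baseline are illustrative assumptions, not the paper's implementation.

```python
import random

# Clauses are lists of nonzero signed ints: literal 3 means x3, -3 means NOT x3.

def simplify(clauses, partial):
    """Apply a partial assignment (var -> bool): drop clauses it satisfies,
    strip literals it falsifies from the rest."""
    residual = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in partial:
                if partial[var] == want:
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if not satisfied:
            residual.append(kept)
    return residual

def num_satisfied(clauses, assignment):
    return sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def approx_with_subset_advice(clauses, n, advice, approx):
    """Fix the advised variables, run a black-box approximation on the
    residual formula, and merge the two partial assignments."""
    residual = simplify(clauses, advice)
    free_vars = [v for v in range(1, n + 1) if v not in advice]
    assignment = dict(advice)
    assignment.update(approx(residual, free_vars))
    return assignment

def random_approx(clauses, free_vars):
    """Stand-in black box: a uniform random assignment satisfies each
    width-k clause with probability 1 - 2^(-k) in expectation."""
    return {v: random.random() < 0.5 for v in free_vars}

# Tiny demo: advice reveals x1 = True from some optimal assignment.
clauses = [[1, 2], [-1, 3], [2, -3], [-2, 3]]
result = approx_with_subset_advice(clauses, 3, {1: True}, random_approx)
print(result, num_satisfied(clauses, result), "of", len(clauses))
```

For example, plugging α = 0.94 for MAX-2-SAT into the claimed black-box improvement α + (1 − α)ε gives 0.94 + 0.06ε, matching the 0.94 + Ω(ε) ratio stated above.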

📝 Abstract
Learning-augmented algorithms are a prominent recent development in beyond worst-case analysis. In this framework, a problem instance is provided with a prediction (``advice'') from a machine-learning oracle, which provides partial information about an optimal solution, and the goal is to design algorithms that leverage this advice to improve worst-case performance. We study the classic Boolean satisfiability (SAT) decision and optimization problems within this framework using two forms of advice. ``Subset advice'' provides a random $\epsilon$ fraction of the variables from an optimal assignment, whereas ``label advice'' provides noisy predictions for all variables in an optimal assignment. For the decision problem $k$-SAT, by using the subset advice we accelerate the exponential running time of the PPSZ family of algorithms due to Paturi, Pudlak, Saks and Zane, which currently represent the state of the art in the worst case. We accelerate the running time by a multiplicative factor of $2^{-c}$ in the base of the exponent, where $c$ is a function of $\epsilon$ and $k$. For the optimization problem, we show how to incorporate subset advice in a black-box fashion with any $\alpha$-approximation algorithm, improving the approximation ratio to $\alpha + (1 - \alpha)\epsilon$. Specifically, we achieve approximations of $0.94 + \Omega(\epsilon)$ for MAX-$2$-SAT, $7/8 + \Omega(\epsilon)$ for MAX-$3$-SAT, and $0.79 + \Omega(\epsilon)$ for MAX-SAT. Moreover, for label advice, we obtain near-optimal approximation for instances with large average degree, thereby generalizing recent results on MAX-CUT and MAX-$2$-LIN.
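
To make the PPSZ speedup concrete, here is a heavily simplified Python sketch, assuming a PPZ-style variant that forces values only through unit clauses; the actual PPSZ family uses bounded-width resolution for stronger forcing, and the paper analyzes that family, so this illustrates the idea rather than the analyzed algorithm. The intuition for the advice gain: with an ε-fraction of variables fixed correctly up front, the random-order phase guesses over fewer variables, which is where the $2^{-c}$ factor in the base of the exponent comes from.

```python
import random

def satisfies(clauses, assignment):
    return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses)

def forced_value(clauses, assignment, v):
    """Return the value forced on variable v by a unit clause under the
    current partial assignment, or None. (PPZ-style forcing; full PPSZ
    uses bounded-width resolution here.)"""
    for clause in clauses:
        if any(abs(l) in assignment and assignment[abs(l)] == (l > 0) for l in clause):
            continue  # clause already satisfied
        alive = [l for l in clause if abs(l) not in assignment]
        if len(alive) == 1 and abs(alive[0]) == v:
            return alive[0] > 0
    return None

def ppz_with_advice(clauses, n, advice, tries=10000):
    """Randomized search with subset advice: fix the advised variables,
    then process the remaining variables in a random order, taking forced
    values when available and guessing uniformly otherwise."""
    free = [v for v in range(1, n + 1) if v not in advice]
    for _ in range(tries):
        assignment = dict(advice)
        for v in random.sample(free, len(free)):  # random permutation of free vars
            forced = forced_value(clauses, assignment, v)
            assignment[v] = forced if forced is not None else random.random() < 0.5
        if satisfies(clauses, assignment):
            return assignment
    return None  # no satisfying assignment found within the try budget
```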
Problem

Research questions and friction points this paper is trying to address.

Improving the worst-case running time of SAT decision algorithms using machine-learned predictions
Improving approximation ratios for MAX-SAT optimization variants using subset advice
Achieving near-optimal solutions from noisy label advice on instances with large average degree
Innovation

Methods, ideas, or system contributions that make the work stand out.

Subset advice model: reveals correct assignments for a random ε-fraction of optimal variables
Label advice model: noisy predictions for all variables (see the sketch after this list)
Accelerates the PPSZ family of algorithms by fixing advised variables up front
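
A minimal sketch of one natural way to use label advice: seed a greedy local search with the noisy predicted assignment and flip variables while doing so strictly increases the satisfied-clause count. The refinement rule and the name `refine_label_advice` are illustrative assumptions; the paper's near-optimality guarantee for large-average-degree instances may rest on a different procedure.

```python
def num_satisfied(clauses, assignment):
    return sum(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)

def refine_label_advice(clauses, n, predicted):
    """Greedy local search seeded with the advice: flip single variables
    while a flip strictly increases the number of satisfied clauses.
    `predicted` maps every variable 1..n to its noisy predicted bool."""
    assignment = dict(predicted)
    improved = True
    while improved:  # terminates: the satisfied count strictly increases
        improved = False
        for v in range(1, n + 1):
            before = num_satisfied(clauses, assignment)
            assignment[v] = not assignment[v]
            if num_satisfied(clauses, assignment) > before:
                improved = True
            else:
                assignment[v] = not assignment[v]  # revert the unhelpful flip
    return assignment
```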