🤖 AI Summary
This paper addresses several NP-hard graph problems, including MaxCut, Vertex Cover, Set Cover, and Maximum Independent Set, by proposing an approximation framework that leverages weak predictive information. Methodologically, it introduces an edge-level two-bit $\varepsilon$-prediction model in which each edge carries predictions about whether each of its endpoints belongs to an optimal solution; the algorithms treat high-degree and low-degree vertices separately and combine the answers via graph decomposition and randomized analysis. Crucially, the algorithms achieve provable performance improvements assuming only $\varepsilon$-correlation, a minimal statistical dependence between the predictions and an optimal solution, and thereby surpass classical approximation barriers of the prediction-free setting. The framework applies uniformly across multiple problems, is robust to prediction quality, and yields approximation ratios with theoretical guarantees. This work extends the theoretical foundations and applicability of learning-augmented combinatorial optimization, establishing new performance frontiers for prediction-based algorithms under weak predictability assumptions.
📝 Abstract
We design improved approximation algorithms for NP-hard graph problems by incorporating predictions (e.g., learned from past data). Our prediction model builds upon and extends the $\varepsilon$-prediction framework by Cohen-Addad, d'Orsi, Gupta, Lee, and Panigrahi (NeurIPS 2024). We consider an edge-based version of this model, where each edge provides two bits of information, corresponding to predictions about whether each of its endpoints belongs to an optimal solution. Even with weak predictions where each bit is only $\varepsilon$-correlated with the true solution, this information allows us to break approximation barriers in the standard setting. We develop algorithms with improved approximation ratios for MaxCut, Vertex Cover, Set Cover, and Maximum Independent Set problems (among others). Across these problems, our algorithms share a unifying theme, where we separately satisfy constraints related to high-degree vertices (using predictions) and low-degree vertices (without using predictions) and carefully combine the answers.
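To make the prediction model concrete, here is a minimal sketch of how edge-level two-bit predictions might be generated. It assumes one common formalization of $\varepsilon$-correlation (each predicted bit independently agrees with the true membership bit with probability $1/2 + \varepsilon$); the paper's exact model may differ, and the graph, the optimal set, and the function name are all illustrative.

```python
import random

def epsilon_predictions(edges, in_opt, eps, rng=None):
    """For each edge (u, v), emit two predicted bits, one per endpoint.

    Each bit agrees with the true membership bit in_opt[x] with
    probability 1/2 + eps, independently across bits. This is one
    common formalization of eps-correlation, not necessarily the
    paper's exact definition.
    """
    rng = rng or random.Random(0)
    preds = {}
    for (u, v) in edges:
        preds[(u, v)] = tuple(
            in_opt[x] if rng.random() < 0.5 + eps else 1 - in_opt[x]
            for x in (u, v)
        )
    return preds

# Toy example: a path on 4 vertices whose (hypothetical) optimal
# solution is the set {0, 2}, encoded as membership bits.
edges = [(0, 1), (1, 2), (2, 3)]
in_opt = {0: 1, 1: 0, 2: 1, 3: 0}
preds = epsilon_predictions(edges, in_opt, eps=0.3)
```

With `eps` near 0 the bits are nearly uniform noise; with `eps = 0.5` they reproduce the optimal solution exactly, so `eps` interpolates between no information and perfect advice.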