🤖 AI Summary
Many combinatorial optimization (CO) applications demand rapid generation of high-quality feasible solutions. Method: This paper proposes a data-driven framework that enhances classical approximation algorithms by leveraging graph neural networks (GNNs) to dynamically tune their parameters—while rigorously preserving solution feasibility. Contribution/Results: We introduce preference-based gradient estimation, the first technique enabling end-to-end, self-supervised, differentiable training of black-box approximation algorithms. This bridges the gap between data-adaptive learning and structural guarantees inherent in classical algorithms. Evaluated on the Traveling Salesman Problem (TSP) and Minimum k-Cut, our approach matches state-of-the-art learning-based solvers in solution quality, substantially outperforms the original approximation algorithms, and guarantees feasibility without post-processing.
📝 Abstract
Combinatorial optimization (CO) problems arise in a wide range of fields from medicine to logistics and manufacturing. While exact solutions are often not necessary, many applications require finding high-quality solutions quickly. For this purpose, we propose a data-driven approach to improve existing non-learned approximation algorithms for CO. We parameterize the approximation algorithm and train a graph neural network (GNN) to predict parameter values that lead to the best possible solutions. Our pipeline is trained end-to-end in a self-supervised fashion using gradient estimation, treating the approximation algorithm as a black box. We propose a novel gradient estimation scheme for this purpose, which we call preference-based gradient estimation. Our approach combines the benefits of the neural network and the non-learned approximation algorithm: The GNN leverages the information from the dataset to allow the approximation algorithm to find better solutions, while the approximation algorithm guarantees that the solution is feasible. We validate our approach on two well-known combinatorial optimization problems, the travelling salesman problem and the minimum k-cut problem, and show that our method is competitive with state of the art learned CO solvers.