Preference-Based Gradient Estimation for ML-Based Approximate Combinatorial Optimization

📅 2025-02-26
🤖 AI Summary
Many combinatorial optimization (CO) applications demand rapid generation of high-quality feasible solutions. This paper proposes a data-driven framework that enhances classical approximation algorithms by leveraging graph neural networks (GNNs) to dynamically tune their parameters while rigorously preserving solution feasibility. It introduces preference-based gradient estimation, the first technique enabling end-to-end, self-supervised, differentiable training of black-box approximation algorithms, bridging the gap between data-adaptive learning and the structural guarantees inherent in classical algorithms. Evaluated on the Traveling Salesman Problem (TSP) and Minimum k-Cut, the approach matches state-of-the-art learning-based solvers in solution quality, substantially outperforms the original approximation algorithms, and guarantees feasibility without post-processing.

📝 Abstract
Combinatorial optimization (CO) problems arise in a wide range of fields from medicine to logistics and manufacturing. While exact solutions are often not necessary, many applications require finding high-quality solutions quickly. For this purpose, we propose a data-driven approach to improve existing non-learned approximation algorithms for CO. We parameterize the approximation algorithm and train a graph neural network (GNN) to predict parameter values that lead to the best possible solutions. Our pipeline is trained end-to-end in a self-supervised fashion using gradient estimation, treating the approximation algorithm as a black box. We propose a novel gradient estimation scheme for this purpose, which we call preference-based gradient estimation. Our approach combines the benefits of the neural network and the non-learned approximation algorithm: The GNN leverages the information from the dataset to allow the approximation algorithm to find better solutions, while the approximation algorithm guarantees that the solution is feasible. We validate our approach on two well-known combinatorial optimization problems, the travelling salesman problem and the minimum k-cut problem, and show that our method is competitive with state-of-the-art learned CO solvers.
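Because the approximation algorithm is treated as a black box, gradients must be estimated from function evaluations alone. The paper's exact estimator is not detailed on this page; the sketch below shows one plausible preference-based variant, in which each update uses only which of two perturbed parameter settings the black box prefers (the sign of the cost comparison), not the cost difference itself. The function names and the toy quadratic cost are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def approx_algorithm(params):
    # Hypothetical black-box approximation algorithm: returns the cost of
    # the solution it builds from the given parameter vector. A toy
    # quadratic with a fixed optimum stands in so the sketch is runnable.
    target = np.array([1.0, -2.0, 0.5])
    return float(np.sum((params - target) ** 2))

def preference_gradient(params, sigma=0.1, n_pairs=32, rng=None):
    # Preference-based estimate: sample antithetic perturbation pairs,
    # query the black box on both, and accumulate a step toward whichever
    # member of each pair the black box "prefers" (lower cost). Only the
    # sign of the comparison enters the estimate.
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(params)
    for _ in range(n_pairs):
        eps = rng.normal(size=params.shape)
        cost_plus = approx_algorithm(params + sigma * eps)
        cost_minus = approx_algorithm(params - sigma * eps)
        grad += np.sign(cost_plus - cost_minus) * eps
    return grad / n_pairs

# Plain gradient descent stands in for the GNN parameter update.
params = np.zeros(3)
for step in range(500):
    params -= 0.05 * preference_gradient(params, rng=step)
```

In the actual pipeline the GNN would output `params` per problem instance and the estimated gradient would be backpropagated into the GNN weights; this sketch only illustrates the sign-based black-box estimate itself.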
Problem

Research questions and friction points this paper is trying to address.

Improve non-learned approximation algorithms for CO.
Train a GNN to predict optimal algorithm parameters.
Validate on the traveling salesman and minimum k-cut problems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses graph neural networks for parameter prediction.
Implements preference-based gradient estimation technique.
Combines neural networks with non-learned algorithms.
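To illustrate how learned parameters can steer a classical heuristic without breaking feasibility, the toy sketch below biases a nearest-neighbour TSP construction with per-edge offsets standing in for GNN predictions. The algorithm and names are hypothetical simplifications, not the paper's actual parameterization; the point is only that any bias values still yield a valid tour by construction, so no post-processing is needed.

```python
import numpy as np

def biased_nearest_neighbour_tour(dist, bias):
    # Nearest-neighbour TSP heuristic whose greedy choice is shifted by
    # learned per-edge offsets (a stand-in for GNN-predicted parameters).
    # Regardless of the bias values, the loop visits every node exactly
    # once, so the output is always a feasible tour.
    n = len(dist)
    tour = [0]
    unvisited = set(range(1, n))
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[cur][j] + bias[cur][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Even a random bias still produces a feasible tour.
rng = np.random.default_rng(0)
pts = rng.random((6, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour = biased_nearest_neighbour_tour(dist, rng.normal(size=dist.shape))
```

Training would then adjust the bias (via the GNN and the gradient estimate) to shorten `tour_length`, while feasibility remains guaranteed by the constructive heuristic itself.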
Arman Mielke
ETAS Research, Stuttgart, Germany; Computer Science Department, University of Stuttgart, Germany; Max Planck Research School for Intelligent Systems (IMPRS-IS)
Uwe Bauknecht
ETAS Research, Stuttgart, Germany
Thilo Strauss
Associate Professor, Xi’an Jiaotong-Liverpool University (XJTLU)
Mathias Niepert
University of Stuttgart & NEC Labs Europe