Principled data augmentation for learning to solve quadratic programming problems

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Learning-to-optimize (L2O) for quadratic programming (QP) suffers from data scarcity, which limits the generalization and transferability of learned solvers. Method: We propose a principled data augmentation and self-supervised pretraining framework tailored for message-passing graph neural networks (MPNNs). First, we design the first theoretically guaranteed optimality-preserving data augmentation strategy for QPs, generating diverse yet optimality-preserving synthetic instances. Second, we incorporate contrastive learning to enable self-supervised pretraining of MPNNs, enhancing generalization and cross-task transferability. Contribution/Results: Experiments demonstrate significant performance gains in supervised QP solving. Moreover, the pretrained model transfers successfully to strong-branching score prediction in branch-and-bound, replacing the computationally expensive exact scores while retaining theoretical rigor and practical efficacy.
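The summary highlights a "theoretically guaranteed optimality-preserving" augmentation but does not spell out the operators. As a hedged illustration only, the sketch below applies two classical transformations to a convex QP min ½xᵀQx + cᵀx s.t. Ax ≤ b that provably leave the optimal solution unchanged: positive scaling of the objective and positive row-scaling of the constraints. All names are hypothetical, not the paper's actual operators.

```python
import numpy as np

def augment_qp(Q, c, A, b, rng):
    """Return a perturbed QP with the same optimal solution x*.

    Illustrative optimality-preserving transformations (not
    necessarily the paper's operators):
      1. Scale the objective by alpha > 0: the argmin is unchanged
         (only the optimal *value* scales by alpha).
      2. Scale each inequality row by d_i > 0: the feasible set
         {x : Ax <= b} is unchanged.
    """
    alpha = rng.uniform(0.5, 2.0)                # objective scaling
    d = rng.uniform(0.5, 2.0, size=A.shape[0])   # per-row constraint scaling
    return alpha * Q, alpha * c, d[:, None] * A, d * b

# Tiny 2-variable example
rng = np.random.default_rng(0)
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 2.0], [-1.0, 0.0]])
b = np.array([4.0, 0.0])
Q2, c2, A2, b2 = augment_qp(Q, c, A, b, rng)
```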

📝 Abstract
Linear and quadratic optimization are crucial in numerous real-world applications, from training machine learning models to integer-linear optimization. Recently, learning-to-optimize (L2O) methods for linear programs (LPs) or quadratic programs (QPs) using message-passing graph neural networks (MPNNs) have gained traction, promising lightweight, data-driven proxies for solving such optimization problems. For example, they can replace the costly computation of strong-branching scores in branch-and-bound solvers, which requires solving many such optimization problems. However, training robust L2O MPNNs remains challenging in data-scarce settings, especially for complex optimization problems such as QPs. This work introduces a principled approach to data augmentation tailored for QPs via MPNNs. Our method leverages theoretically justified data augmentation techniques to generate diverse yet optimality-preserving instances. Furthermore, we integrate these augmentations into a self-supervised learning framework based on contrastive learning, thereby pretraining MPNNs for enhanced performance on L2O tasks. Extensive experiments demonstrate that our approach improves generalization in supervised scenarios and facilitates effective transfer learning to related optimization problems.
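The contrastive pretraining step described in the abstract can be pictured with a standard NT-Xent objective: two augmented views of the same QP instance are embedded by the MPNN and pulled together, while the other instances in the batch serve as negatives. This is a minimal sketch of that loss under those assumptions; the encoder and all names are illustrative, as the paper's exact formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of embedding pairs.

    z1[i] and z2[i] are MPNN embeddings of two augmented views of the
    same QP instance; all other pairs in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, d)
    sim = z @ z.t() / temperature             # scaled cosine similarities
    mask = torch.eye(sim.shape[0], dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))     # exclude self-similarity
    n = z1.shape[0]
    # The positive for view i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage (hypothetical MPNN encoder `encode`):
# z1, z2 = encode(graphs_view1), encode(graphs_view2)
# loss = nt_xent_loss(z1, z2)
```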
Problem

Research questions and friction points this paper is trying to address.

Enhancing data-scarce learning-to-optimize methods for quadratic programs
Generating diverse, optimality-preserving QP instances via principled augmentation
Improving MPNN generalization and transfer learning for optimization tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data augmentation for QPs via MPNNs (see the graph-encoding sketch after this list)
Self-supervised learning with contrastive learning
Optimality-preserving instance generation
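
For context on the MPNN items above, a common encoding in the L2O literature represents an LP/QP as a graph with variable and constraint nodes: entries of A become variable-constraint edges, and for QPs the nonzeros of Q become variable-variable edges. The sketch below follows that convention; the paper's exact graph construction may differ.

```python
import numpy as np

def qp_to_graph(Q, c, A, b):
    """Encode  min 0.5 x^T Q x + c^T x  s.t.  Ax <= b  as a graph.

    Variable nodes carry c_j, constraint nodes carry b_i; nonzeros of
    A and Q define the two edge types. One common L2O encoding, not
    necessarily this paper's exact construction.
    """
    var_feat = c.reshape(-1, 1)               # variable-node features
    con_feat = b.reshape(-1, 1)               # constraint-node features
    ci, vj = np.nonzero(A)                    # variable-constraint edges
    vc_edges = (np.stack([vj, ci]), A[ci, vj])
    qi, qj = np.nonzero(np.triu(Q))           # variable-variable edges
    vv_edges = (np.stack([qi, qj]), Q[qi, qj])
    return var_feat, con_feat, vc_edges, vv_edges
```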