CoCo-MILP: Inter-Variable Contrastive and Intra-Constraint Competitive MILP Solution Prediction

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GNN-based methods for mixed-integer linear programming (MILP) suffer from a dual mismatch: binary cross-entropy (BCE) loss ignores the relative priority among variables, while standard message-passing mechanisms fail to capture competitive relationships among variables within the same constraint. To address this, we propose CoCo-MILP, a framework that unifies inter-variable contrastive learning with intra-constraint competition modeling in GNNs. Specifically, we design a contrastive loss that maximizes the embedding margin between variables assigned one versus zero, and introduce a competition-aware GNN layer that explicitly differentiates, rather than smooths, the representations of mutually exclusive variables sharing a constraint. Evaluated on standard MILP benchmarks, CoCo-MILP significantly improves solution prediction quality, reducing the solution gap by up to 68.12% compared to traditional solvers and substantially accelerating overall MILP solving.

📝 Abstract
Mixed-Integer Linear Programming (MILP) is a cornerstone of combinatorial optimization, yet solving large-scale instances remains a significant computational challenge. Recently, Graph Neural Networks (GNNs) have shown promise in accelerating MILP solvers by predicting high-quality solutions. However, we identify that existing methods misalign with the intrinsic structure of MILP problems at two levels. At the learning objective level, the Binary Cross-Entropy (BCE) loss treats variables independently, neglecting their relative priority and yielding logits that fail to reflect it. At the model architecture level, standard GNN message passing inherently smooths the representations across variables, missing the natural competitive relationships within constraints. To address these challenges, we propose CoCo-MILP, which explicitly models inter-variable Contrast and intra-constraint Competition for advanced MILP solution prediction. At the objective level, CoCo-MILP introduces the Inter-Variable Contrastive Loss (VCL), which explicitly maximizes the embedding margin between variables assigned one versus zero. At the architectural level, we design an Intra-Constraint Competitive GNN layer that, instead of homogenizing features, learns to differentiate representations of competing variables within a constraint, capturing their exclusionary nature. Experimental results on standard benchmarks demonstrate that CoCo-MILP significantly outperforms existing learning-based approaches, reducing the solution gap by up to 68.12% compared to traditional solvers. Our code is available at https://github.com/happypu326/CoCo-MILP.
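The abstract does not reproduce the exact form of the Inter-Variable Contrastive Loss (VCL). As a minimal sketch of the idea it describes, a hinge-style contrastive objective that enlarges the embedding margin between variables assigned one versus zero might look like the following. The function name `vcl_loss` and the `margin` value are illustrative assumptions, not the authors' definitions:

```python
import math

def vcl_loss(embeddings, labels, margin=1.0):
    """Sketch of an inter-variable contrastive loss (assumed form, not
    the paper's): pull together embeddings of variables with the same
    0/1 assignment, and push apart embeddings of variables assigned 1
    versus 0 until their distance exceeds `margin` (hinge penalty)."""
    loss, pairs = 0.0, 0
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(embeddings[i], embeddings[j])
            if labels[i] == labels[j]:
                loss += d ** 2                      # attract same-label pair
            else:
                loss += max(0.0, margin - d) ** 2   # repel 1-vs-0 pair
            pairs += 1
    return loss / max(pairs, 1)

# Toy variable embeddings for a binary assignment (two 1s, two 0s).
emb = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
lab = [1, 1, 0, 0]
print(round(vcl_loss(emb, lab), 4))  # → 0.0105
```

Only cross-label pairs closer than `margin` contribute a repulsion term, so the loss goes to zero once one-assigned and zero-assigned variables are sufficiently separated in embedding space.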
Problem

Research questions and friction points this paper is trying to address.

Addresses misalignment in MILP solution prediction objectives and architectures
Models inter-variable contrast and intra-constraint competition relationships
Improves solution quality for large-scale combinatorial optimization problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inter-variable contrastive loss maximizes embedding margin
Intra-constraint competitive GNN differentiates competing variables
Combines contrastive learning with competitive message passing
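Since standard message passing averages features across a constraint's variables (and thus smooths them), one way to picture the competition idea above is an update that subtracts a fraction of the competitors' mean feature instead of adding it. This is a loose sketch under assumed names (`competitive_update`, `beta`), not the paper's actual layer:

```python
def competitive_update(features, constraints, beta=0.5):
    """Sketch of a competition-style aggregation (assumed form): each
    variable's feature is pushed AWAY from the mean feature of the
    variables it shares constraints with, differentiating co-constrained
    (mutually exclusive) variables rather than homogenizing them.
    `features`: dict variable-id -> scalar feature.
    `constraints`: list of variable-id lists, one per constraint row."""
    comp_sum = {v: 0.0 for v in features}
    comp_cnt = {v: 0 for v in features}
    for row in constraints:
        for v in row:
            for u in row:
                if u != v:
                    comp_sum[v] += features[u]
                    comp_cnt[v] += 1
    return {v: features[v]
               - beta * (comp_sum[v] / comp_cnt[v] if comp_cnt[v] else 0.0)
            for v in features}

# Toy bipartite instance: variable 1 competes with 0 and 2.
feats = {0: 1.0, 1: 0.5, 2: 0.2}
cons = [[0, 1], [1, 2]]
print(competitive_update(feats, cons))
```

In the toy example, variable 0's feature moves away from its competitor's value rather than toward it, so variables inside the same constraint end up more distinguishable after the update.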
Tianle Pu
College of Systems Engineering, National University of Defense Technology
Jianing Li
College of Systems Engineering, National University of Defense Technology
Yingying Gao
College of Systems Engineering, National University of Defense Technology
Shixuan Liu
National University of Defense Technology
Knowledge Reasoning · Domain Generalization · Causal Inference · Data Engineering
Zijie Geng
MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
Haoyang Liu
MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
Chao Chen
College of Systems Engineering, National University of Defense Technology
Changjun Fan
Associate Professor, National University of Defense Technology
graph neural network · combinatorial optimization · reinforcement learning