UniHetCO: A Unified Heterogeneous Representation for Multi-Problem Learning in Unsupervised Neural Combinatorial Optimization

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes UniHetCO, a unified unsupervised framework for jointly learning to solve multiple classes of constrained quadratic combinatorial optimization problems, addressing the limited generalizability of existing unsupervised neural methods, which typically target a single task. UniHetCO constructs a heterogeneous graph representation that jointly encodes problem structure, objective functions, and linear constraints. To stabilize multi-problem training, it introduces a gradient-norm-based dynamic loss weighting mechanism. Experiments demonstrate that UniHetCO achieves performance comparable to state-of-the-art unsupervised methods across four node subset-selection tasks, including maximum clique and maximum independent set, while exhibiting strong cross-problem transferability. The method also provides effective warm starts for commercial solvers, significantly accelerating their convergence.
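The gradient-norm-based dynamic loss weighting mentioned above can be illustrated with a minimal sketch. The paper's exact rule is not reproduced here; the function below is a hypothetical GradNorm-style scheme (all names and the normalization are illustrative assumptions) that down-weights problem classes whose losses currently produce large gradients, so no single class dominates the shared parameter update.

```python
import numpy as np

def gradnorm_weights(grad_norms, alpha=1.0):
    """Hypothetical gradient-norm balancing (not the paper's exact rule).

    Each problem class gets a loss weight inversely proportional to its
    current gradient norm, so classes with large gradients are damped and
    every class contributes a comparable share of the update.
    """
    g = np.asarray(grad_norms, dtype=float)
    target = g.mean()                        # balance point: mean gradient norm
    w = (target / np.maximum(g, 1e-12)) ** alpha
    return w * len(w) / w.sum()              # normalize so weights sum to #classes

# One class produces 4x larger gradients than the other two ...
w = gradnorm_weights([4.0, 1.0, 1.0])
# ... after weighting, every class has the same effective gradient norm.
print(np.round(w * np.array([4.0, 1.0, 1.0]), 3))  # → [1.333 1.333 1.333]
```

In practice the per-class gradient norms would be measured on the shared encoder's parameters each step, and the weights recomputed as training progresses.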

📝 Abstract
Unsupervised neural combinatorial optimization (NCO) offers an appealing alternative to supervised approaches by training learning-based solvers without ground-truth solutions, directly minimizing instance objectives and constraint violations. Yet for graph node subset-selection problems (e.g., Maximum Clique and Maximum Independent Set), existing unsupervised methods are typically specialized to a single problem class and rely on problem-specific surrogate losses, which hinders learning across classes within a unified framework. In this work, we propose UniHetCO, a unified heterogeneous graph representation for constrained quadratic programming-based combinatorial optimization that encodes problem structure, objective terms, and linear constraints in a single input. This formulation enables training a single model across multiple problem classes with a unified label-free objective. To improve stability under multi-problem learning, we employ a gradient-norm-based dynamic weighting scheme that alleviates gradient imbalance among classes. Experiments on multiple datasets and four constrained problem classes demonstrate competitive performance with state-of-the-art unsupervised NCO baselines, strong cross-problem adaptation potential, and effective warm starts for a commercial classical solver under tight time limits.
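The "unified label-free objective" described in the abstract can be sketched for the constrained quadratic case. The penalty form and all names below are illustrative assumptions, not the paper's implementation: relaxed node-selection probabilities are scored by the quadratic objective plus a hinge penalty on violated linear constraints, so the loss can be minimized without ground-truth solutions.

```python
import numpy as np

def unsupervised_loss(p, Q, A, b, lam=10.0):
    """Label-free relaxation loss for a constrained quadratic problem (sketch).

    p : relaxed node-selection probabilities in [0, 1]
    Q : quadratic objective matrix (minimize p^T Q p)
    A, b : linear constraints A x <= b, penalized when violated
    lam : penalty weight on constraint violations
    """
    objective = p @ Q @ p
    violation = np.maximum(A @ p - b, 0.0)   # hinge on each constraint row
    return objective + lam * np.sum(violation ** 2)

# Toy 3-node instance: maximize selections (Q = -I) subject to sum(x) <= 2.
Q = -np.eye(3)
A = np.ones((1, 3))
b = np.array([2.0])

feasible = np.array([1.0, 1.0, 0.0])
infeasible = np.array([1.0, 1.0, 1.0])
print(unsupervised_loss(feasible, Q, A, b))    # → -2.0 (no penalty)
print(unsupervised_loss(infeasible, Q, A, b))  # → 7.0 (-3.0 objective + 10.0 penalty)
```

Because the loss depends only on the instance data (Q, A, b), the same training signal applies to any problem class expressible in this constrained quadratic form, which is what lets a single model be trained across classes.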
Problem

Research questions and friction points this paper is trying to address.

unsupervised neural combinatorial optimization
graph node subset-selection
multi-problem learning
unified framework
heterogeneous representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

unified heterogeneous representation
unsupervised neural combinatorial optimization
multi-problem learning
gradient-norm dynamic weighting
constrained quadratic programming