🤖 AI Summary
This work addresses the lack of feasibility guarantees in message-passing graph neural networks (MPNNs) when solving convex optimization problems with linear constraints. Methodologically, we propose the first provably feasible iterative MPNN framework: it explicitly models interior-point method (IPM) dynamics to keep the search strictly inside the feasible region, and an iteration scheme initialized at a feasible point ensures that every intermediate output satisfies all constraints. Theoretically, this is the first MPNN formulation with a rigorous proof that it simulates an IPM. Experimentally, our approach significantly outperforms existing neural baselines in both solution quality and strict feasibility, generalizes well across problem scales, and on certain instances even achieves faster runtimes than commercial solvers such as Gurobi.
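To make the feasibility mechanism concrete, here is a minimal NumPy sketch of one standard way to keep iterates feasible, the fraction-to-the-boundary step rule used in interior-point methods. The names `direction_fn`, `max_feasible_step`, and the value of `tau` are illustrative assumptions, not the paper's implementation: starting from a strictly feasible point, the step length is capped so that no constraint slack ever reaches zero, so every iterate stays inside the feasible region.

```python
import numpy as np

def max_feasible_step(A, b, x, dx, tau=0.995):
    """Fraction-to-the-boundary rule: largest alpha in (0, 1] such that
    A @ (x + alpha * dx) < b still holds strictly, scaled back by tau."""
    slack = b - A @ x   # strictly positive if x is strictly feasible
    rate = A @ dx       # how fast each constraint's slack is consumed
    limiting = rate > 0  # only constraints whose slack shrinks limit the step
    if not np.any(limiting):
        return 1.0
    return min(1.0, tau * np.min(slack[limiting] / rate[limiting]))

def feasible_iteration(A, b, x0, direction_fn, num_steps=20):
    """Iterate from a strictly feasible x0; every intermediate point stays
    feasible because the step never lets any slack b - A @ x reach zero."""
    x = x0.copy()
    for _ in range(num_steps):
        dx = direction_fn(x)  # e.g., a search direction predicted by an MPNN
        alpha = max_feasible_step(A, b, x, dx)
        x = x + alpha * dx
    return x
```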
📝 Abstract
Recently, message-passing graph neural networks (MPNNs) have shown potential for solving combinatorial and continuous optimization problems due to their ability to capture variable-constraint interactions. While existing approaches leverage MPNNs to approximate solutions or warm-start traditional solvers, they often lack feasibility guarantees, particularly in convex optimization settings. Here, we propose an iterative MPNN framework that solves convex optimization problems with provable feasibility guarantees. First, we demonstrate that MPNNs can provably simulate standard interior-point methods for solving quadratic problems with linear constraints, covering relevant problems such as support vector machines (SVMs). Second, to ensure feasibility, we introduce a variant that starts from a feasible point and iteratively restricts the search to the feasible region. Experimental results show that our approach outperforms existing neural baselines in solution quality and feasibility, generalizes well to unseen problem sizes, and, in some cases, achieves faster solution times than state-of-the-art solvers such as Gurobi.
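As a rough illustration of the variable-constraint interactions the abstract refers to, the sketch below encodes a quadratic program as a bipartite graph between variable and constraint nodes and runs a few message-passing rounds over it. The feature dimension, tanh updates, and random embeddings are assumptions chosen for a self-contained example, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing_round(A, h_var, h_con):
    """One round on the bipartite variable-constraint graph: each constraint
    node aggregates from the variables it touches (weighted by the entries
    of A), then each variable node aggregates back from its constraints."""
    h_con = np.tanh(h_con + A @ h_var)    # constraint-side update, shape (m, d)
    h_var = np.tanh(h_var + A.T @ h_con)  # variable-side update, shape (n, d)
    return h_var, h_con

# Toy QP instance: min 1/2 x^T Q x + c^T x  s.t.  A x <= b
n, m, d = 4, 3, 8
Q, A = np.eye(n), rng.normal(size=(m, n))
c, b = rng.normal(size=n), np.abs(rng.normal(size=m))

# Hypothetical embeddings: lift per-node scalars (c, diag(Q), b) to d dims.
h_var = np.stack([c, np.diag(Q)], axis=1) @ rng.normal(size=(2, d))
h_con = b[:, None] @ rng.normal(size=(1, d))

for _ in range(3):
    h_var, h_con = message_passing_round(A, h_var, h_con)
```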