🤖 AI Summary
Computing the graph domination number—an NP-hard combinatorial invariant—is computationally prohibitive for large graphs.
Method: This paper proposes an end-to-end graph neural network (GNN)-based regression framework that directly models graph structure and message-passing dynamics, bypassing conventional CNN approaches that rely on adjacency matrix representations.
Contribution/Results: Evaluated on 2,000 random graphs with up to 64 vertices, the model achieves an R² score of 0.987 and a mean absolute error (MAE) of 0.372, significantly outperforming CNN baselines. Inference is over 200× faster than exact algorithms. These results demonstrate that GNNs serve as highly effective, generalizable surrogate models for NP-hard graph invariants, establishing a novel machine learning paradigm for approximating combinatorially intractable graph parameters.
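To see why exact computation is the bottleneck the surrogate model avoids, here is a minimal brute-force sketch (not the paper's solver) that finds the domination number by checking vertex subsets of increasing size. The function name `domination_number` and the edge-list representation are illustrative assumptions; the worst case enumerates all $2^n$ subsets, which is exactly what makes large instances prohibitive.

```python
from itertools import combinations

def domination_number(n, edges):
    """Smallest k such that some k-subset of vertices dominates the graph.

    A set S dominates if every vertex is in S or adjacent to some vertex
    in S. Brute force over subsets is exponential in n -- illustrative
    only, and the reason exact methods stall on large graphs.
    """
    # Closed neighborhoods: N[v] = {v} union neighbors(v)
    closed = {v: {v} for v in range(n)}
    for u, v in edges:
        closed[u].add(v)
        closed[v].add(u)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            covered = set().union(*(closed[v] for v in S))
            if len(covered) == n:
                return k
    return n

# 5-cycle C5: {0, 2} dominates every vertex, so the answer is 2
print(domination_number(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # -> 2
```

A learned regressor replaces this exponential search with a single forward pass, which is where the reported $200\times$ speedup comes from.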
📝 Abstract
We investigate machine learning approaches to approximating the \emph{domination number} of graphs, the minimum size of a dominating set. Exact computation of this parameter is NP-hard, restricting classical methods to small instances. We compare two neural paradigms: Convolutional Neural Networks (CNNs), which operate on adjacency matrix representations, and Graph Neural Networks (GNNs), which learn directly from graph structure through message passing. Across 2,000 random graphs with up to 64 vertices, GNNs achieve markedly higher accuracy ($R^2=0.987$, MAE $=0.372$) than CNNs ($R^2=0.955$, MAE $=0.500$). Both models offer substantial speedups over exact solvers, with GNNs delivering more than $200\times$ acceleration while retaining near-perfect fidelity. Our results position GNNs as a practical surrogate for combinatorial graph invariants, with implications for scalable graph optimization and mathematical discovery.
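The contrast between the two paradigms, a CNN reading a (permutation-sensitive) adjacency matrix versus a GNN aggregating over neighborhoods, can be illustrated with a toy mean-aggregation message-passing pass. This is a hand-rolled sketch, not the paper's architecture: the function `message_pass`, the round count, and the mean-pool readout are all illustrative assumptions, and a real model would interleave learned weights and nonlinearities before regressing the domination number from the pooled embedding.

```python
def message_pass(features, adj, rounds=2):
    """Toy GNN forward pass with mean aggregation.

    features: list of per-vertex feature vectors (lists of floats).
    adj: list where adj[v] is the list of neighbors of vertex v.
    Each round, every vertex replaces its vector with the mean of its
    own and its neighbors' vectors; the readout mean-pools all vertex
    embeddings into one graph-level vector.
    """
    h = [list(f) for f in features]
    for _ in range(rounds):
        h = [
            [sum(col) / (len(adj[v]) + 1)                      # mean over v and its neighbors
             for col in zip(*([h[v]] + [h[u] for u in adj[v]]))]
            for v in range(len(h))
        ]
    # Readout: mean-pool vertex embeddings into a single graph embedding
    return [sum(col) / len(h) for col in zip(*h)]

# Triangle graph: after one round every vertex holds the global mean
print(message_pass([[1.0], [2.0], [3.0]], [[1, 2], [0, 2], [0, 1]]))  # -> [2.0]
```

Because aggregation is defined over neighbor sets rather than matrix positions, the same layer applies to any vertex ordering and any graph size, which is the structural advantage the abstract attributes to GNNs over adjacency-matrix CNNs.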