🤖 AI Summary
This work addresses the real-time computation of maximum independent sets (MaxIS) on dynamic graphs. We propose the first unsupervised learning framework for this problem, integrating graph neural networks (GNNs) for structural representation learning with a learnable, distributed node-state update mechanism. Our method enables single-step, parallel, and localized membership inference under edge insertions and deletions. A key innovation is neighborhood-radius parameterization, which gives the model strong generalization: trained on small graphs (100–1,000 nodes), it can run inference on graphs up to 100× larger. On dynamic graphs with 100–10,000 nodes, our approach achieves higher solution quality than existing heuristic-learning methods while running 1.5–23× faster than greedy baselines and consuming less memory. When generalized to ultra-large-scale graphs, it matches the performance of commercial MIP solvers, making it the first method to break the long-standing accuracy–efficiency trade-off in online MaxIS solving on dynamic graphs.
📝 Abstract
We present the first unsupervised learning model for finding Maximum Independent Sets (MaxIS) in dynamic graphs where edges change over time. Our method combines structural learning from graph neural networks (GNNs) with a learned distributed update mechanism that, given an edge addition or deletion event, modifies nodes' internal memories and infers their MaxIS membership in a single, parallel step. We parameterize our model by the update mechanism's radius and investigate the resulting performance-runtime tradeoffs for various dynamic graph topologies. We evaluate our model against state-of-the-art MaxIS methods for static graphs, including a mixed integer programming solver, deterministic rule-based algorithms, and a heuristic learning framework based on dynamic programming and GNNs. Across synthetic and real-world dynamic graphs of 100-10,000 nodes, our model achieves competitive approximation ratios with excellent scalability; on large graphs, it significantly outperforms the state-of-the-art heuristic learning framework in solution quality, runtime, and memory usage. Our model generalizes well to graphs 100x larger than the ones used for training, achieving performance on par with both a greedy technique and a commercial mixed integer programming solver while running 1.5-23x faster than greedy.
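The actual update mechanism described above is learned end-to-end, but the core idea of a radius-limited, localized repair after an edge event can be illustrated with a simple hypothetical rule-based analogue. The sketch below (all names and the greedy repair rule are our own illustration, not the paper's model) restores independence after an edge insertion while touching only the radius-`k` ball around the new edge:

```python
from collections import deque

def ball(adj, source, radius):
    """All nodes within `radius` hops of `source` (BFS), including `source`."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == radius:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen

def local_repair(adj, in_set, u, v, radius=2):
    """Hypothetical stand-in for the learned update: after inserting edge
    (u, v), fix any independence violation and greedily re-add nodes,
    touching only nodes within `radius` hops of the inserted edge."""
    adj[u].add(v)
    adj[v].add(u)
    region = ball(adj, u, radius) | ball(adj, v, radius)
    # If both endpoints were in the set, evict the higher-degree one.
    if in_set[u] and in_set[v]:
        evict = u if len(adj[u]) >= len(adj[v]) else v
        in_set[evict] = False
    # Greedily re-add eligible nodes inside the region, low degree first.
    for node in sorted(region, key=lambda n: len(adj[n])):
        if not in_set[node] and not any(in_set[nb] for nb in adj[node]):
            in_set[node] = True
    return in_set

# Example: path 0-1 plus isolated node 2; inserting edge (0, 2) puts two
# set members in conflict, and the repair stays local to the ball.
adj = {0: {1}, 1: {0}, 2: set()}
in_set = {0: True, 1: False, 2: True}
print(local_repair(adj, in_set, 0, 2))
```

Because each repair reads and writes only a bounded neighborhood, many such updates can run in parallel on disjoint regions, which is the intuition behind the single-step, distributed inference claimed above; the learned model replaces the hand-written eviction and re-add rules with GNN-derived node states.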