🤖 AI Summary
This paper addresses the NP-hard Maximum Weight Independent Set (MWIS) problem on large-scale graphs by proposing the first distributed-memory parallel reduction framework. Methodologically, it introduces two asynchronous heuristic strategies, reduce-and-peel and reduce-and-greedy, which combine parallel data-reduction rules with heuristic vertex peeling or greedy solution construction. The key contribution is the first extension of the reduction paradigm to distributed-memory systems, enabling scalable MWIS computation on massive graphs. Experimental results demonstrate a 33× average speedup over a sequential state-of-the-art approach on a 1024-core cluster and successful processing of graphs with more than one billion vertices and 17 billion edges, with solution quality close to that of the sequential algorithm.
📝 Abstract
Finding maximum-weight independent sets in graphs is an important NP-hard optimization problem. Given a vertex-weighted graph $G$, the task is to find a subset of pairwise non-adjacent vertices of $G$ with maximum weight. Most recently published practical exact algorithms and heuristics for this problem use a variety of data-reduction rules to compute (near-)optimal solutions. Applying these rules results in an equivalent instance of reduced size. An optimal solution to the reduced instance can be easily used to construct an optimal solution for the original input.
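To make the reduction idea concrete, here is a minimal sequential sketch (not the paper's implementation) of one classic MWIS data-reduction rule, often called *neighborhood removal*: if a vertex's weight is at least the total weight of its neighbors, some optimal solution contains it, so it can be forced into the solution and its closed neighborhood deleted. The adjacency-dict representation and function name are illustrative assumptions; the paper applies a larger set of such rules in parallel.

```python
# Illustrative sketch (assumed representation: adjacency dict + weight dict,
# nonnegative weights), NOT the paper's distributed implementation.
def neighborhood_removal(adj, weight):
    """Exhaustively apply the neighborhood-removal rule.

    adj: dict mapping vertex -> set of neighbors (modified in place).
    weight: dict mapping vertex -> nonnegative weight.
    Returns (forced, adj): vertices forced into some optimal MWIS,
    and the remaining reduced (equivalent) instance.
    """
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue  # already deleted in an earlier pass
            # Rule: v outweighs its whole neighborhood (isolated vertices
            # qualify trivially, since the empty sum is 0).
            if weight[v] >= sum(weight[u] for u in adj[v]):
                forced.add(v)
                # Delete the closed neighborhood N[v] from the graph.
                for u in list(adj[v]) + [v]:
                    for x in adj.pop(u, set()):
                        if x in adj:
                            adj[x].discard(u)
                changed = True
                break  # restart: the graph changed
    return forced, adj
```

An optimal solution of the reduced instance plus the forced vertices is then an optimal solution of the original graph, which is exactly the lifting step the abstract describes.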
In this work, we present the first distributed-memory parallel reduction algorithms for this problem, targeting graphs beyond the scale of previous sequential approaches. Furthermore, we propose the first distributed reduce-and-greedy and reduce-and-peel algorithms for finding a maximum weight independent set heuristically.
In our practical evaluation, experiments on up to $1024$ processors demonstrate good scalability of our distributed reduce algorithms while maintaining strong reduction impact. Our asynchronous reduce-and-peel approach achieves an average speedup of $33\times$ over a sequential state-of-the-art reduce-and-peel approach on 36 real-world graphs, with solution quality close to the sequential algorithm. Our reduce-and-greedy algorithms achieve even higher average speedups of up to $50\times$, at the cost of lower solution quality. Moreover, our distributed approach allows us to consider graphs with more than one billion vertices and 17 billion edges.
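For intuition, the generic reduce-and-peel scheme behind such heuristics can be sketched sequentially as below; the paper's contribution is a distributed, asynchronous version of this loop. The trivial isolated-vertex rule standing in for the full reduction suite, and the smallest weight-to-degree-ratio peeling criterion, are illustrative assumptions, not the paper's exact choices.

```python
# Hedged sequential sketch of the reduce-and-peel idea, under assumptions
# stated in the lead-in; the real algorithms run distributed reductions.
def reduce_and_peel(adj, weight):
    """adj: dict vertex -> set of neighbors (consumed); weight: vertex -> weight.
    Alternate between reducing and heuristically peeling until the graph is empty."""
    solution = set()
    while adj:
        # Reduce: isolated vertices are always safe to take (trivial rule
        # standing in for the paper's full set of reduction rules).
        isolated = [v for v in adj if not adj[v]]
        if isolated:
            for v in isolated:
                solution.add(v)
                del adj[v]
            continue
        # Peel: no rule applies, so heuristically discard the vertex with the
        # smallest weight-to-degree ratio, assuming it is NOT in the solution.
        v = min(adj, key=lambda u: weight[u] / len(adj[u]))
        for x in adj.pop(v):
            adj[x].discard(v)
    return solution
```

A reduce-and-greedy variant differs only in the stalled case: instead of discarding a vertex, it greedily commits a promising vertex to the solution and removes its neighborhood, trading solution quality for speed, consistent with the trade-off reported above.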