🤖 AI Summary
This paper addresses the parallel batch-update problem for core decomposition in dynamic graphs. We propose the first dynamic parallel algorithm with worst-case time guarantees, unlike prior approaches that offer only amortized complexity bounds. Our algorithm supports batch edge insertions and deletions of arbitrary size, processing a batch of size $b$ in $\tilde{O}(b/p)$ worst-case time on $p$ processors. It achieves total work $b \cdot \text{poly}(\log n)$ and polylogarithmic parallel depth $\text{poly}(\log n)$, yielding near-optimal parallel efficiency. To the best of our knowledge, this is the first core decomposition algorithm that simultaneously guarantees worst-case performance, batch-update capability, and high parallel scalability. The result advances both theoretical robustness, by eliminating reliance on amortization, and practical applicability, through efficient handling of real-world bursty graph updates.
📝 Abstract
We present the first parallel batch-dynamic algorithm for approximating the coreness decomposition with worst-case update times. Given any batch of edge insertions and deletions, our algorithm processes all updates in $\text{poly}(\log n)$ depth, with a worst-case work bound of $b \cdot \text{poly}(\log n)$, where $b$ denotes the batch size. Consequently, the batch is processed in $\tilde{O}(b/p)$ time on $p$ processors, which is optimal up to logarithmic factors. An algorithm with similar guarantees was previously known from the celebrated work of Liu, Shi, Yu, Dhulipala, and Shun [SPAA'22], but with the caveat that its work bound, and thus its runtime, is only amortized.
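The runtime claim can be sketched with the standard work-depth accounting (a textbook derivation via Brent's scheduling principle, not a step taken from the paper itself): a computation with work $W$ and depth $D$ runs on $p$ processors in $O(W/p + D)$ time. Substituting the stated bounds, and letting $\tilde{O}$ absorb polylogarithmic factors (with, say, $p \le b$ so the depth term is dominated):

```latex
% Brent's scheduling bound: time on p processors is O(W/p + D)
T_p \;=\; O\!\left(\frac{W}{p} + D\right)
     \;=\; O\!\left(\frac{b \cdot \text{poly}(\log n)}{p} + \text{poly}(\log n)\right)
     \;=\; \tilde{O}\!\left(\frac{b}{p}\right)
```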