🤖 AI Summary
This paper addresses the problem of efficiently maintaining a (Δ+1)-coloring in a dynamic graph under edge insertions and deletions, where Δ denotes an upper bound on the maximum degree. We propose the first unified dynamic coloring framework spanning the sequential, parallel, and distributed models. Our method combines randomized local recoloring with batched synchronization and achieves, for the first time, constant expected cost in all three models: O(1) expected worst-case update time in the sequential model; O(1) expected work per update and poly(log n) depth in the parallel model; and at most O(1) uncolored nodes per update, with convergence in O(log n) rounds, in the distributed model. The algorithm natively supports batch updates, and its correctness and efficiency rest on message pruning and a rigorous convergence analysis.
📝 Abstract
We present a simple randomized algorithm that can efficiently maintain a $(\Delta+1)$-coloring as the graph undergoes edge insertion and deletion updates, where $\Delta$ denotes an upper bound on the maximum degree. A key advantage is the algorithm's ability to process many updates simultaneously, which makes it naturally adaptable to the parallel and distributed models. Concretely, it gives a unified framework across the models, leading to the following results:
- In the sequential setting, the algorithm processes each update in expected $O(1)$ worst-case time. This matches and strengthens the results of Henzinger and Peng [TALG 2022] and Bhattacharya et al. [TALG 2022], who achieved an amortized $O(1)$ bound (in expectation and with high probability, respectively), which in turn improved on the $O(\log \Delta)$ expected amortized bound of Bhattacharya et al. [SODA'18].
- In the parallel setting, the algorithm processes each batch of updates (of arbitrary size) using $O(1)$ expected work per update in the batch and $\text{poly}(\log n)$ depth with high probability. This is, in a sense, an ideal parallelization of the above results.
- In the distributed setting, the algorithm can maintain a coloring of the network graph as (potentially many) edges are added or deleted. The maintained coloring is always proper; it may become partial upon updates, i.e., some nodes may temporarily lose their colors, but it quickly converges back to a full, proper coloring. Concretely, each insertion or deletion causes at most $O(1)$ nodes to become uncolored, and this is resolved within $O(\log n)$ rounds with high probability (e.g., in the absence of further updates nearby; the precise guarantee is stronger, but technical). Importantly, the algorithm incurs only $O(1)$ expected message complexity and computation per update.
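The core primitive behind all three results is local recoloring: since every vertex has at most $\Delta$ neighbors, a palette of $\Delta+1$ colors always leaves at least one color free, so a conflicting endpoint can be recolored locally. The sketch below is a minimal sequential illustration of this invariant (not the paper's full algorithm, which uses a more careful randomized color choice and batching to get its expected $O(1)$ worst-case bound); the graph representation and function names are ours.

```python
import random

def recolor(graph, colors, v, delta):
    """Assign v a uniformly random color from {0, ..., delta} not used by
    any neighbor. v has at most delta neighbors, so the palette is nonempty."""
    forbidden = {colors[u] for u in graph[v] if u in colors}
    palette = [c for c in range(delta + 1) if c not in forbidden]
    colors[v] = random.choice(palette)

def insert_edge(graph, colors, u, v, delta):
    """Insert edge (u, v); if it creates a monochromatic edge, recolor a
    random endpoint. Deletions never create conflicts, so need no recoloring."""
    graph[u].add(v)
    graph[v].add(u)
    if colors.get(u) is not None and colors.get(u) == colors.get(v):
        recolor(graph, colors, random.choice([u, v]), delta)

# Tiny demo: build a triangle edge by edge; delta = 2, so 3 colors suffice.
graph = {0: set(), 1: set(), 2: set()}
colors = {0: 0, 1: 1, 2: 0}
insert_edge(graph, colors, 0, 1, delta=2)
insert_edge(graph, colors, 1, 2, delta=2)
insert_edge(graph, colors, 0, 2, delta=2)  # conflict: 0 and 2 share color 0
assert all(colors[u] != colors[v] for u in graph for v in graph[u])
```

In the sketch a recoloring touches only one vertex and its neighborhood, which is what makes the technique amenable to batched, parallel, and distributed execution: conflicts created by a batch of insertions can be repaired independently at different vertices, subject to synchronization between neighbors.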