🤖 AI Summary
This paper studies the fully dynamic set cover problem: maintaining an approximately optimal solution under element insertions and deletions, while minimizing both recourse (the number of set changes per update) and update time. The authors present the first algorithms with non-trivial worst-case guarantees for both measures simultaneously, achieving $O(\log n)$ worst-case recourse and $f \cdot \mathrm{polylog}(n)$ worst-case update time. The approach integrates hierarchical hashing, lazy updates, and offline approximation techniques to construct a dynamic data structure online. The algorithms attain approximation ratios of $O(\log n)$ and $O(f)$, matching the best-known offline bounds, and thus achieve optimal solution quality in both regimes. Crucially, unlike prior work, which either provided only amortized guarantees or required trade-offs between recourse and update time, this result is the first to attain *dual worst-case optimality*: both recourse and update time are bounded in the worst case, without sacrificing approximation quality.
📝 Abstract
In (fully) dynamic set cover, the goal is to maintain an approximately optimal solution to a dynamically evolving instance of set cover, where in each step either an element is added to or removed from the instance. The two main desiderata of a dynamic set cover algorithm are to minimize, at each time-step, the recourse, which is the number of sets removed from or added to the solution, and the update time to compute the updated solution. This problem has been extensively studied over the last decade, leading to many results that achieve ever-improving bounds on the recourse and update time, while maintaining a solution whose cost is comparable to that of offline approximation algorithms. In this paper, we give the first algorithms to simultaneously achieve non-trivial worst-case bounds for recourse and update time. Specifically, we give fully-dynamic set cover algorithms that simultaneously achieve $O(\log n)$ recourse and $f \cdot \mathrm{poly}\log(n)$ update time in the worst-case, for both approximation regimes: $O(\log n)$ and $O(f)$ approximation. (Here, $n, f$ respectively denote the maximum number of elements and maximum frequency of an element across all instances.) Prior to our work, all results for this problem either settled for amortized bounds on recourse and update time, or obtained $f \cdot \mathrm{poly}\log(n)$ update time in the worst-case but at the cost of $\Omega(m)$ worst-case recourse. (Here, $m$ denotes the number of sets. Note that any algorithm has recourse at most $m$.)
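For readers unfamiliar with the offline baseline being matched, the $O(\log n)$ approximation regime refers to the classic greedy set cover algorithm, which repeatedly picks the set covering the most uncovered elements and achieves an $H_n \le \ln n + 1$ approximation. A minimal sketch (this is the standard offline greedy, not the paper's dynamic algorithm; the instance below is a made-up illustration):

```python
def greedy_set_cover(universe, sets):
    """Return indices of sets chosen by the greedy rule:
    repeatedly pick the set covering the most uncovered elements.
    Achieves an H_n <= ln(n) + 1 approximation of the optimum."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set with maximum marginal coverage of uncovered elements.
        i = max(range(len(sets)), key=lambda j: len(uncovered & sets[j]))
        if not uncovered & sets[i]:
            raise ValueError("instance is not coverable")
        chosen.append(i)
        uncovered -= sets[i]
    return chosen

# Toy instance: 5 elements, 4 sets.
universe = {1, 2, 3, 4, 5}
sets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(universe, sets))  # -> [0, 3]
```

The dynamic setting asks for a solution of comparable quality while an adversary inserts and deletes elements; naively rerunning greedy after each update would incur up to $\Omega(m)$ recourse per step, which is exactly the cost the paper's algorithms avoid.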