🤖 AI Summary
This study addresses contention resolution, in which multiple parties seek conflict-free access to a shared resource, focusing on how the global-clock assumption affects latency. By designing a novel randomized protocol and combining refined probabilistic analysis with iterated-logarithm techniques, the work establishes, for the first time, a roughly log n asymptotic complexity advantage of the global-clock model over the local-clock model. It further reveals a fundamental trade-off between expected and high-probability latency: the two cannot be simultaneously optimized. The proposed global-clock protocol achieves latency n(log log n)^{1+o(1)} both in expectation and with high probability, while for memoryless local-clock protocols the expected latency is Θ(n log n / log log n) and the high-probability latency is Θ(n log²n / log log n); three of these four upper and lower bounds are new.
📝 Abstract
In the Contention Resolution problem, $n$ parties each wish to have exclusive use of a shared resource for one unit of time. The problem has been studied since the early 1970s under a variety of assumptions on the feedback given to the parties, how the parties wake up, knowledge of $n$, and so on. The assumption held most consistently is that parties do not have access to a global clock, only their local time since wake-up. This is surprising, because the assumption of a global clock is both technologically realistic and algorithmically interesting: it enriches the problem and opens the door to entirely new techniques. Our primary results are:

[1] We design a new Contention Resolution protocol that guarantees latency $$O\left(\left(n\log\log n\log^{(3)} n\log^{(4)} n\cdots \log^{(\log^* n)} n\right)\cdot 2^{\log^* n}\right) \le n(\log\log n)^{1+o(1)}$$ in expectation and with high probability. This already establishes at least a roughly $\log n$ complexity gap between randomized protocols in GlobalClock and LocalClock.

[2] Prior analyses of randomized Contention Resolution protocols in LocalClock guaranteed a certain latency with high probability, i.e., with probability $1-1/\text{poly}(n)$. We observe that it is just as natural to measure expected latency, and prove a $\log n$-factor complexity gap between the two objectives for memoryless protocols: the In-Expectation complexity is $\Theta(n \log n/\log\log n)$, whereas the With-High-Probability latency is $\Theta(n\log^2 n/\log\log n)$. Three of these four upper and lower bounds are new.

[3] Given the complexity separation above, one would naturally want a Contention Resolution protocol that is optimal under both the In-Expectation and With-High-Probability metrics. This is impossible! It is even impossible to achieve In-Expectation latency $o(n\log^2 n/(\log\log n)^2)$ and With-High-Probability latency $n\log^{O(1)} n$ simultaneously.
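To make the iterated-logarithm bound in result [1] concrete, the sketch below (an illustration, not code from the paper) numerically evaluates the product $n\cdot\log\log n\cdot\log^{(3)} n\cdots\log^{(\log^* n)} n\cdot 2^{\log^* n}$ for a few values of $n$. The helper names (`log_star`, `iterated_log`, `latency_bound`) are invented here for illustration; logarithms are taken base 2.

```python
import math

def iterated_log(x, k):
    """Apply log2 to x exactly k times: log^(k) x."""
    for _ in range(k):
        x = math.log2(x)
    return x

def log_star(x):
    """log* x: how many times log2 must be applied before the value is <= 1."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

def latency_bound(n):
    """Evaluate n * log log n * log^(3) n * ... * log^(log* n) n * 2^(log* n)."""
    ls = log_star(n)
    product = float(n)
    for k in range(2, ls + 1):
        # clamp tail terms at 1 so a final term below 1 cannot shrink the product
        product *= max(iterated_log(n, k), 1.0)
    return product * 2 ** ls

for n in (2**16, 2**32, 2**64):
    print(f"n = 2^{n.bit_length() - 1}: bound ≈ {latency_bound(n):.3e}")
```

Because each extra factor is an iterated logarithm of the previous one, the whole product beyond $n\log\log n$ is absorbed into the $(\log\log n)^{o(1)}$ term, which is why the bound simplifies to $n(\log\log n)^{1+o(1)}$.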