Vizing's Theorem in Deterministic Almost-Linear Time

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vizing’s theorem guarantees a $(\Delta+1)$-edge-coloring for any graph with maximum degree $\Delta$. Prior deterministic algorithms required $\tilde{O}(m\sqrt{n})$ time, and while randomized approaches had nearly reached linear time, a deterministic linear (or even nearly linear) algorithm remained elusive. This work presents the first deterministic almost-linear-time algorithm for $(\Delta+1)$-edge-coloring, running in $m^{1+o(1)}$ time and thus breaking the long-standing $\tilde{O}(m\sqrt{n})$ barrier. The core technical innovation is a novel deterministic color-type sparsification technique that entirely avoids sublinear-time subroutines. Integrated within a divide-and-conquer framework and coupled with an efficient coloring scheduling mechanism, it achieves a time complexity of $m \cdot 2^{O(\sqrt{\log \Delta})} \cdot \log n$. This result constitutes a major theoretical advance in deterministic edge coloring, resolving a central open problem in graph algorithms and combinatorial optimization.
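To situate the $(\Delta+1)$ bound, a minimal illustration (this is not the paper's algorithm, just a hedged sketch of the problem): a single greedy pass already produces a proper edge coloring, but only guarantees at most $2\Delta - 1$ colors, since each endpoint of an edge forbids at most $\Delta - 1$ colors. Closing the gap from $2\Delta - 1$ down to $\Delta + 1$, deterministically and in almost-linear time, is what the paper achieves.

```python
from collections import defaultdict

def greedy_edge_coloring(edges):
    """Greedily assign each edge the smallest color unused at both endpoints.

    Produces a proper edge coloring with at most 2*Delta - 1 colors
    (far from the Delta + 1 bound of Vizing's theorem).
    """
    used = defaultdict(set)   # vertex -> colors already on its incident edges
    coloring = {}
    for u, v in edges:
        c = 1
        while c in used[u] or c in used[v]:
            c += 1            # at most 2*(Delta-1) colors are forbidden
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring
```

For example, on the edges `[(0, 1), (1, 2), (0, 2), (2, 3)]` (maximum degree $\Delta = 3$ at vertex 2), the greedy pass stays within the $2\Delta - 1 = 5$ color budget.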

📝 Abstract
Vizing's theorem states that any $n$-vertex $m$-edge graph of maximum degree $\Delta$ can be edge colored using at most $\Delta + 1$ different colors. Vizing's original proof is easily translated into a deterministic $O(mn)$ time algorithm. This deterministic time bound was subsequently improved to $\tilde{O}(m\sqrt{n})$ time, independently by [Arjomandi, 1982] and by [Gabow et al., 1985]. A series of recent papers improved the time bound of $\tilde{O}(m\sqrt{n})$ using randomization, culminating in the randomized near-linear time $(\Delta+1)$-coloring algorithm by [Assadi, Behnezhad, Bhattacharya, Costa, Solomon, and Zhang, 2025]. At the heart of all of these recent improvements, there is some form of a sublinear time algorithm. Unfortunately, sublinear time algorithms as a whole almost always require randomization. This raises a natural question: can the deterministic time complexity of the problem be reduced below the $\tilde{O}(m\sqrt{n})$ barrier? In this paper, we answer this question in the affirmative. We present a deterministic almost-linear time $(\Delta+1)$-coloring algorithm, namely, an algorithm running in $m \cdot 2^{O(\sqrt{\log \Delta})} \cdot \log n = m^{1+o(1)}$ time. Our main technical contribution is to entirely forego sublinear time algorithms. We do so by presenting a new deterministic color-type sparsification approach that runs in almost-linear (instead of sublinear) time, but can be used to color a much larger set of edges.
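The $\Delta + 1$ bound in the theorem is tight: some graphs genuinely need the extra color. As a small self-contained check (not from the paper), a backtracking search on the Petersen graph, a standard 3-regular example known to require 4 edge colors, confirms that $\Delta = 3$ colors are insufficient while $\Delta + 1 = 4$ suffice:

```python
def edge_color(edges, k):
    """Backtracking search for a proper k-edge-coloring.

    Returns a dict mapping each edge to a color in 0..k-1, or None if
    no proper coloring with k colors exists.
    """
    used = {v: set() for e in edges for v in e}  # vertex -> colors in use
    coloring = {}

    def solve(i):
        if i == len(edges):
            return True
        u, v = edges[i]
        for c in range(k):
            if c not in used[u] and c not in used[v]:
                used[u].add(c)
                used[v].add(c)
                coloring[edges[i]] = c
                if solve(i + 1):
                    return True
                used[u].discard(c)
                used[v].discard(c)
                del coloring[edges[i]]
        return False

    return coloring if solve(0) else None

# Petersen graph: outer 5-cycle, five spokes, inner pentagram.
outer = [(i, (i + 1) % 5) for i in range(5)]
spokes = [(i, i + 5) for i in range(5)]
inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
petersen = outer + spokes + inner
```

Here `edge_color(petersen, 3)` returns `None` while `edge_color(petersen, 4)` succeeds, matching the theorem's guarantee that $\Delta + 1$ colors always suffice even when $\Delta$ do not.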
Problem

Research questions and friction points this paper is trying to address.

Develop a deterministic almost-linear time edge-coloring algorithm
Overcome the deterministic time-complexity barrier of $\tilde{O}(m\sqrt{n})$
Achieve a $(\Delta+1)$-coloring without relying on sublinear-time techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

A deterministic almost-linear time $(\Delta+1)$-edge-coloring algorithm
A new color-type sparsification approach that forgoes sublinear-time subroutines
Sparsification that runs in almost-linear time yet colors a much larger set of edges