🤖 AI Summary
This paper studies the online metric matching problem in the Euclidean metric space $[0,1]^d$: $n$ servers are placed adversarially in advance, and $n$ requests arrive sequentially; each request must be irrevocably matched to an idle server upon arrival, incurring cost equal to the Euclidean distance. Requests are independent but not identically distributed, satisfying only a mild smoothness condition. The paper presents the first $o(\log n)$-competitive algorithm for non-i.i.d. inputs in a nontrivial metric space, breaking the $\Omega(\log n)$ lower bound inherent to classical metric embedding approaches. Key technical innovations include a single-sample learning framework, a deterministic metric embedding, a smoothness analysis based on the cost of suboptimal solutions, and lower bounds on the offline optimal cost derived via stochastic majorization. For any dimension $d \neq 2$, the authors design an $O(1)$-competitive algorithm requiring only one sample per distribution, without prior knowledge of the distributions, significantly improving over all existing methods.
📝 Abstract
In the online metric matching problem, $n$ servers and $n$ requests lie in a metric space. Servers are available upfront, and requests arrive sequentially. An arriving request must be matched immediately and irrevocably to an available server, incurring a cost equal to their distance. The goal is to minimize the total matching cost.
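As a toy illustration of this cost model (not the paper's algorithm), the sketch below runs a naive greedy baseline that matches each arriving request to the nearest still-idle server; the random instance in $[0,1]^2$ is purely hypothetical:

```python
import math
import random

def greedy_online_matching(servers, requests):
    """Illustrative greedy baseline for online metric matching.

    Each request is irrevocably assigned on arrival to the nearest
    idle server, paying the Euclidean distance. This greedy rule is
    NOT the paper's algorithm; it only demonstrates the cost model.
    """
    idle = set(range(len(servers)))
    total_cost = 0.0
    for r in requests:
        # choose the closest idle server (ties broken arbitrarily)
        j = min(idle, key=lambda i: math.dist(servers[i], r))
        idle.remove(j)
        total_cost += math.dist(servers[j], r)
    return total_cost

# hypothetical instance: 8 adversarial servers, 8 independent requests
random.seed(0)
servers = [(random.random(), random.random()) for _ in range(8)]
requests = [(random.random(), random.random()) for _ in range(8)]
print(greedy_online_matching(servers, requests))
```

Greedy is known to be far from optimal in the worst case; the abstract's point is that, under smooth non-i.i.d. request distributions, a single sample per distribution already suffices for an $O(1)$-competitive strategy.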
We study this problem in the Euclidean metric $[0, 1]^d$, when servers are adversarial and requests are independently drawn from distinct distributions that satisfy a mild smoothness condition. Our main result is an $O(1)$-competitive algorithm for $d \neq 2$ that requires no distributional knowledge, relying only on a single sample from each request distribution. To our knowledge, this is the first algorithm to achieve an $o(\log n)$ competitive ratio for non-trivial metrics beyond the i.i.d. setting. Our approach bypasses the $\Omega(\log n)$ barrier introduced by probabilistic metric embeddings: instead of analyzing the embedding distortion and the algorithm separately, we directly bound the cost of the algorithm on the target metric of a simple deterministic embedding. We then combine this analysis with lower bounds on the offline optimum for Euclidean metrics, derived via majorization arguments, to obtain our guarantees.