🤖 AI Summary
This paper studies fair matching under ordinal preferences in metric spaces: $n$ agents and $n$ items reside in a common metric space, and only the agents' ordinal rankings of the items by distance (not the exact distances) are known. The objective is to minimize the *maximum individual matching cost*, a fairness-oriented criterion contrasting with the conventional sum-cost minimization. The paper shows that a variant of RepMatch, the state-of-the-art mechanism for this setting, achieves distortion $O(n^{1.58})$ under the max-cost objective, improving on its $O(n^2)$ guarantee under the sum objective, even though the two notions of distortion can differ by a factor of $n$ in general. Moreover, the same algorithm guarantees distortion $O(n^2)$ for any fairness objective defined by a monotone symmetric norm. The work connects ordinal mechanism design, fairness modeling, and metric distortion analysis, providing guarantees for fairness-aware matching from ordinal information alone.
📝 Abstract
We consider the matching problem in the metric distortion framework. There are $n$ agents and $n$ items occupying points in a shared metric space, and the goal is to design a matching mechanism that outputs a low-cost matching between the agents and items, using only the agents' ordinal rankings of the items by distance. A mechanism has distortion $\alpha$ if, on every instance and regardless of the underlying metric space, it outputs a matching whose cost is within a factor of $\alpha$ of the optimum.
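In symbols (a standard way to formalize metric distortion; the notation $\sigma$, $d$, $\mu$ is ours, not taken from the paper):

$$
\mathrm{dist}(M) \;=\; \sup_{\sigma}\; \sup_{d \,\triangleright\, \sigma}\; \frac{\mathrm{cost}_d\bigl(M(\sigma)\bigr)}{\min_{\mu}\, \mathrm{cost}_d(\mu)},
$$

where $\sigma$ ranges over ordinal preference profiles, $d$ over metrics consistent with $\sigma$ (written $d \triangleright \sigma$), and $\mu$ over all perfect matchings between agents and items.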
Typically, the cost of a matching is measured in terms of the total distance between matched agents and items, but this measure can incentivize unfair outcomes where a handful of agents bear the brunt of the cost. With this in mind, we consider how the metric distortion problem changes when the cost is instead measured in terms of the maximum cost of any agent. We show that while these two notions of distortion can in general differ by a factor of $n$, the distortion of a variant of the state-of-the-art mechanism, RepMatch, actually improves from $O(n^2)$ under the sum objective to $O(n^{1.58})$ under the max objective. We also show that for any fairness objective defined by a monotone symmetric norm, this algorithm guarantees distortion $O(n^2)$.
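To see how the two objectives can disagree, here is a minimal brute-force sketch on a toy instance. This is purely illustrative: unlike the paper's mechanisms, it uses the full distance matrix rather than ordinal rankings, and the instance and function names are our own.

```python
from itertools import permutations

# Toy 2x2 instance (these distances are realizable in the plane, so they
# form a metric): agent a1 sits on item i1 but is at distance 3 from i2;
# agent a2 is at distance 3 from i1 and distance 5 from i2.
dist = [[0, 3],
        [3, 5]]

def best_matching(dist, objective):
    """Brute-force the matching (as a permutation agent -> item)
    minimizing the given objective over agents' individual costs."""
    n = len(dist)
    return min(permutations(range(n)),
               key=lambda perm: objective(dist[a][perm[a]] for a in range(n)))

min_sum = best_matching(dist, sum)  # utilitarian: minimize total cost
min_max = best_matching(dist, max)  # egalitarian: minimize worst agent's cost

# The sum-optimal matching (0, 1) leaves agent a2 with cost 5,
# while the max-optimal matching (1, 0) caps every agent's cost at 3.
print(min_sum, max(dist[a][min_sum[a]] for a in range(2)))  # (0, 1) 5
print(min_max, max(dist[a][min_max[a]] for a in range(2)))  # (1, 0) 3
```

The sum-optimal matching happily sacrifices one agent to save total cost, which is exactly the unfairness the max objective is designed to prevent.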