🤖 AI Summary
This work addresses the lower bound of 3 on the distortion of any deterministic voting rule in the metric distortion framework. The authors break this barrier while retaining transparency and interpretability by using only limited randomness: a deterministic procedure first selects a candidate list of constant size, and the winner is then chosen uniformly at random from that list. Leveraging new structural results on the distortion and approximation of maximal lotteries and stable lotteries, the paper shows for the first time that this bounded form of randomness suffices to achieve distortion strictly below 3. The proposed voting rule attains distortion at most \(3 - \varepsilon\), where \(\varepsilon > 0\) is an absolute constant, yielding a favorable trade-off between performance and interpretability.
📝 Abstract
We study the design of voting rules in the metric distortion framework. It is known that any deterministic rule suffers distortion of at least $3$, and that randomized rules can achieve distortion strictly less than $3$, often at the cost of reduced transparency and interpretability. In this work, we explore the trade-off between these paradigms by asking whether it is possible to break the distortion barrier of $3$ using only "bounded" randomness. We answer in the affirmative by presenting a voting rule that (1) achieves distortion of at most $3 - \varepsilon$ for some absolute constant $\varepsilon > 0$, and (2) selects a winner uniformly at random from a deterministically identified list of constant size. Our analysis builds on new structural results for the distortion and approximation of Maximal Lotteries and Stable Lotteries.
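To make the "uniform over a deterministic shortlist" template concrete, the sketch below computes the metric distortion of such a rule on a toy one-dimensional instance. This is an illustration of the distortion *measure* only: the `plurality_shortlist` selection used here is a hypothetical placeholder, not the paper's actual rule, and a real voting rule would see only voters' rankings (which, in a metric space, are induced by distances as below), not the positions themselves.

```python
# Illustrative sketch (not the paper's rule): metric distortion of a rule
# that draws the winner uniformly from a deterministically chosen shortlist,
# on a toy 1-D metric instance where voters and candidates are points.

def social_cost(c, voters):
    """Total distance from candidate position c to all voter positions."""
    return sum(abs(c - v) for v in voters)

def plurality_shortlist(voters, candidates, k):
    """Hypothetical shortlist: the k candidates ranked first by most voters.
    Each voter's favorite is their nearest candidate (metric preferences)."""
    firsts = {c: 0 for c in candidates}
    for v in voters:
        firsts[min(candidates, key=lambda c: abs(c - v))] += 1
    return sorted(candidates, key=lambda c: -firsts[c])[:k]

def distortion(voters, candidates, shortlist):
    """Expected cost of a uniform draw from `shortlist`, over optimal cost."""
    expected = sum(social_cost(c, voters) for c in shortlist) / len(shortlist)
    optimal = min(social_cost(c, voters) for c in candidates)
    return expected / optimal

voters = [0.0, 0.1, 0.2, 1.0]
candidates = [0.0, 1.0]
short = plurality_shortlist(voters, candidates, k=2)
print(distortion(voters, candidates, short))  # ≈ 1.538: (1.3 + 2.7)/2 over 1.3
```

On this instance the optimal candidate (position $0.0$) has cost $1.3$, the other has cost $2.7$, so a uniform draw over both gives distortion $2.0/1.3 \approx 1.54$; the paper's contribution is a shortlist construction whose worst-case distortion over *all* metric instances is at most $3 - \varepsilon$.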