🤖 AI Summary
This paper addresses the exponential computational complexity inherent in synthesizing optimal linear mechanisms for automated mechanism design, specifically mechanisms satisfying efficiency, incentive compatibility, strong budget balance (SBB), and individual rationality (IR), with SBB and IR enforced in expectation. To overcome this bottleneck, the authors formulate the estimation of critical expectation terms as a multi-armed bandit (MAB) problem and propose a novel estimator grounded in PAC learning theory. This approach reduces the computational complexity of evaluating the key terms from exponential to $O(N \log N)$ and yields analytically tractable, tight closed-form solutions. Theoretically, the estimator enjoys PAC guarantees on both estimation error and sample complexity. Numerical experiments demonstrate scalability to 128 agents, substantially exceeding the capacity of prior methods, while strictly enforcing all economic constraints. This enables efficient, constraint-compliant mechanism synthesis at unprecedented scale.
📝 Abstract
We analytically derive a class of optimal solutions to a linear program (LP) for automated mechanism design, yielding mechanisms that satisfy efficiency, incentive compatibility, strong budget balance (SBB), and individual rationality (IR), where SBB and IR are enforced in expectation. These solutions can be expressed using a set of essential variables whose cardinality is exponentially smaller than the total number of variables in the original formulation. However, evaluating a key term in these solutions requires exponentially many optimization steps as the number of players $N$ increases. We address this by translating the evaluation of that term into a multi-armed bandit (MAB) problem and developing a probably approximately correct (PAC) estimator with asymptotically optimal sample complexity. This MAB-based approach reduces the optimization complexity from exponential to $O(N \log N)$. Numerical experiments confirm that our method efficiently computes mechanisms with the target properties, scaling to problems with up to $N = 128$ players and substantially improving over prior work.
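To make the MAB framing concrete, the sketch below shows a generic $(\varepsilon, \delta)$-PAC best-arm identification routine via successive elimination with Hoeffding confidence radii. It is an illustrative stand-in, not the paper's estimator: the function names, the Bernoulli reward model, and the specific elimination schedule are assumptions for the example; the paper's estimator and its asymptotically optimal sample complexity are derived for the specific expectation terms in the LP solution.

```python
import math
import random

def pac_best_arm(pull, n_arms, eps=0.1, delta=0.05, seed=0):
    """(eps, delta)-PAC best-arm identification via successive elimination.

    `pull(i)` returns a stochastic reward in [0, 1] for arm i.  With
    probability >= 1 - delta, the returned arm's mean is within eps of
    the best arm's mean.  Generic textbook sketch, not the paper's
    estimator for the LP expectation terms.
    """
    random.seed(seed)
    active = list(range(n_arms))
    means = [0.0] * n_arms
    counts = [0] * n_arms
    t = 0
    while len(active) > 1:
        t += 1
        # One pull of every surviving arm; running-mean update.
        for i in active:
            r = pull(i)
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]
        # Hoeffding confidence radius with a union bound over rounds and arms.
        rad = math.sqrt(math.log(4 * n_arms * t * t / delta) / (2 * t))
        if 2 * rad < eps / 2:  # all surviving arms are eps-close; stop early
            break
        best = max(means[i] for i in active)
        # Eliminate arms whose upper confidence bound falls below the
        # best arm's lower confidence bound.
        active = [i for i in active if means[i] + 2 * rad >= best]
    return max(active, key=lambda i: means[i])
```

The sample complexity of routines of this kind scales as $O\!\left(\sum_i \Delta_i^{-2} \log(1/\delta)\right)$ in the suboptimality gaps $\Delta_i$, which is what lets a sampling-based estimator sidestep exhaustive enumeration of the exponentially many candidates.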