AI Summary
This paper studies optimal contract design for single-parameter agents, whose types are characterized solely by their cost per unit of effort, within a Bayesian framework. It gives the first additive polynomial-time approximation scheme (PTAS) for single-dimensional contract design and rules out the existence of an additive fully polynomial-time approximation scheme (FPTAS), thereby pinning down the problem's computational complexity. In contrast to multi-dimensional settings, this model admits fundamental advantages in both algorithmic tractability and learning efficiency. Combining computational game-theoretic modeling, online learning theory, and sample complexity analysis, the paper develops efficient procedures for learning optimal contracts, establishing regret bounds and sample complexity upper bounds. Together, these results yield the first tight characterization of additive approximability for single-dimensional Bayesian contracts.
Abstract
We study a Bayesian contract design problem in which a principal interacts with an unknown agent. We consider the single-parameter uncertainty model introduced by Alon et al. [2021], in which the agent's type is described by a single parameter, i.e., the cost per unit of effort. Despite its simplicity, several works have shown that single-dimensional contract design is not necessarily easier than its multi-dimensional counterpart in many respects. Perhaps the most surprising result is the reduction by Castiglioni et al. [2025] from multi- to single-dimensional contract design. However, their reduction preserves only multiplicative approximations, leaving open the question of whether additive approximations are easier to obtain than multiplicative ones. In this paper, we answer this question, to some extent, positively. In particular, we provide an additive PTAS for these problems while also ruling out the existence of an additive FPTAS. This, in turn, implies that no reduction from multi- to single-dimensional contracts can preserve additive approximations. Moreover, we show that single-dimensional contract design is fundamentally easier than its multi-dimensional counterpart from a learning perspective. Under mild assumptions, we show that optimal contracts can be learned efficiently, providing results on both regret and sample complexity.
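To make the single-parameter model concrete, the following is a minimal toy sketch (not the paper's algorithm) of a Bayesian principal-agent instance in which the agent's private type is a single cost `c` per unit of effort: the agent best-responds to a contract (a payment per outcome), and the principal evaluates a contract by its expected reward minus payment under a prior over types. All actions, rewards, and type probabilities below are made-up illustrative numbers.

```python
# Toy single-parameter Bayesian contract instance (illustrative numbers only).
import itertools

# Each action: (effort level, distribution over the two outcomes).
actions = [
    (0.0, [0.9, 0.1]),   # low effort: high outcome is unlikely
    (1.0, [0.4, 0.6]),   # high effort: high outcome is likely
]
rewards = [0.0, 10.0]              # principal's reward per outcome
types = [(0.5, 2.0), (0.5, 6.0)]   # prior over the single parameter c: (prob, cost)

def agent_best_response(contract, c):
    """Agent of type c picks the action maximizing E[payment] - c * effort."""
    def agent_utility(action):
        effort, dist = action
        return sum(p * t for p, t in zip(dist, contract)) - c * effort
    return max(actions, key=agent_utility)

def principal_utility(contract):
    """Expected reward minus payment, averaged over the type prior."""
    total = 0.0
    for prob, c in types:
        _, dist = agent_best_response(contract, c)
        total += prob * sum(p * (r - t)
                            for p, r, t in zip(dist, rewards, contract))
    return total

# Brute-force search over a coarse grid of limited-liability contracts
# (non-negative payment per outcome) -- feasible only for tiny instances.
grid = [0.5 * i for i in range(21)]  # payments in {0, 0.5, ..., 10}
best = max(itertools.product(grid, repeat=2), key=principal_utility)
print("best contract:", best, "principal utility:", principal_utility(best))
```

The grid search here stands in for the algorithmic question the paper studies: the principal must trade off paying enough to induce effort from low-cost types against overpaying high-cost types, and the PTAS/FPTAS results concern how accurately this optimum can be approximated in polynomial time.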