Tight Inapproximability for Welfare-Maximizing Autobidding Equilibria

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the computational complexity of computing welfare- and revenue-maximizing equilibria in second-price auctions with automated bidding under return-on-spend (RoS) constraints. It establishes the first tight inapproximability bounds: approximating the welfare-optimal autobidding equilibrium within a factor better than \(2 - \varepsilon\) is NP-hard for any constant \(\varepsilon > 0\), and revenue exhibits logarithmic inapproximability; under the Projection Games Conjecture, the reduction rules out even polynomial approximation factors for revenue. These hardness results persist when value predictions (ML advice) are available and when attention is restricted to relaxed equilibria reached by simple learning algorithms, where constant-factor inapproximability remains. Since the price of anarchy is at most 2, the results imply that even deciding whether a nontrivial equilibrium exists -- one marginally better than the worst-case guarantee -- is NP-hard, tightly characterizing the fundamental limits of computing such equilibria.

📝 Abstract
We examine the complexity of computing welfare- and revenue-maximizing equilibria in autobidding second-price auctions subject to return-on-spend (RoS) constraints. We show that computing an autobidding equilibrium that approximates the welfare-optimal one within a factor of $2 - \epsilon$ is NP-hard for any constant $\epsilon>0$. Moreover, deciding whether there exists an autobidding equilibrium that attains a $1/2 + \epsilon$ fraction of the optimal welfare -- unfettered by equilibrium constraints -- is NP-hard for any constant $\epsilon>0$. This hardness result is tight in view of the fact that the price of anarchy (PoA) is at most $2$, and shows that deciding whether a non-trivial autobidding equilibrium exists -- one that is even marginally better than the worst-case guarantee -- is intractable. For revenue, we establish a stronger logarithmic inapproximability, while under the projection games conjecture, our reduction rules out even a polynomial approximation factor. These results significantly strengthen the APX-hardness of Li and Tang (AAAI'24). Furthermore, we refine our reduction in the presence of ML advice concerning the buyers' valuations, revealing again a close connection between the inapproximability threshold and PoA bounds. Finally, we examine relaxed notions of equilibrium attained by simple learning algorithms, establishing constant inapproximability for both revenue and welfare.
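To make the setting concrete, the following toy sketch (not from the paper, and all names are illustrative) simulates a set of single-item second-price auctions with uniform-bidding autobidders: each bidder $i$ bids $m_i \cdot v_{ij}$ in auction $j$, the winner pays the second-highest bid, and the RoS constraint requires each bidder's total spend not to exceed its total acquired value. The welfare and revenue quantities whose approximation the paper studies are exactly the sums computed below.

```python
# Toy model (illustrative, not the paper's construction): uniform-bidding
# autobidders in single-item second-price auctions with RoS constraints.

def run_auctions(values, multipliers):
    """values[i][j]: bidder i's value in auction j; multipliers[i]: i's bid multiplier.

    Returns (welfare, revenue, ros_ok), where ros_ok checks that every
    bidder's total payment is at most its total acquired value (RoS).
    """
    n, k = len(values), len(values[0])
    welfare = revenue = 0.0
    spend = [0.0] * n   # total payment of each bidder
    gain = [0.0] * n    # total value acquired by each bidder
    for j in range(k):
        bids = [multipliers[i] * values[i][j] for i in range(n)]
        winner = max(range(n), key=lambda i: bids[i])
        price = sorted(bids)[-2] if n > 1 else 0.0  # second-highest bid
        welfare += values[winner][j]
        revenue += price
        spend[winner] += price
        gain[winner] += values[winner][j]
    ros_ok = all(spend[i] <= gain[i] + 1e-9 for i in range(n))
    return welfare, revenue, ros_ok

# Two bidders, two auctions; multipliers (1.0, 1.0) is truthful bidding.
w, r, ok = run_auctions([[3.0, 1.0], [1.0, 2.0]], [1.0, 1.0])
# Each bidder wins one auction at the rival's bid; both RoS constraints hold.
```

In an autobidding equilibrium, no bidder can raise its multiplier to win more value without violating its RoS constraint; the paper shows that finding equilibria whose welfare beats the worst case by any constant margin is NP-hard.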
Problem

Research questions and friction points this paper is trying to address.

autobidding
welfare maximization
revenue maximization
inapproximability
second-price auctions
Innovation

Methods, ideas, or system contributions that make the work stand out.

autobidding
inapproximability
price of anarchy
second-price auctions
return-on-spend constraints