AI Summary
This paper addresses the auto-bidding problem in multi-slot, second-price real-time advertising auctions, formulating it as an optimization task to maximize clicks/conversions under dual constraints: budget and cost-per-acquisition (CPA). We propose an Oracle Imitation Learning paradigm: the posterior-optimal bid is modeled as a nonlinearly constrained multiple-choice knapsack problem (MCKP), whose tractable "oracle" solution serves as supervision for training a lightweight, real-time neural network. Our method integrates constrained optimization, supervised imitation learning, and dynamic feature modeling. Experiments demonstrate substantial improvements over state-of-the-art online and offline reinforcement learning baselines, with significantly higher sample efficiency. Crucially, the training bottleneck shifts from designing policy-learning algorithms to efficiently solving a constrained optimization problem. The framework establishes a novel, scalable paradigm for industrial-grade real-time bidding systems.
Abstract
Online advertising has become one of the most successful business models of the internet era. Impression opportunities are typically allocated through real-time auctions, where advertisers bid to secure advertisement slots. Deciding the best bid for an impression opportunity is challenging, due to the stochastic nature of user behavior and the variability of advertisement traffic over time. In this work, we propose a framework for training auto-bidding agents in multi-slot second-price auctions to maximize acquisitions (e.g., clicks, conversions) while adhering to budget and cost-per-acquisition (CPA) constraints. We exploit the insight that, after an advertisement campaign concludes, determining the optimal bids for each impression opportunity can be framed as a multiple-choice knapsack problem (MCKP) with a nonlinear objective. We propose an "oracle" algorithm that identifies a near-optimal combination of impression opportunities and advertisement slots, considering both past and future advertisement traffic data. This oracle solution serves as a training target for a student network that bids with access only to real-time information, a method we term Oracle Imitation Learning (OIL). Through numerical experiments, we demonstrate that OIL achieves superior performance compared to both online and offline reinforcement learning algorithms, offering improved sample efficiency. Notably, OIL shifts the complexity of training auto-bidding agents from crafting sophisticated learning algorithms to efficiently solving a nonlinear constrained optimization problem.
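To make the hindsight "oracle" step concrete, here is a minimal toy sketch of the idea. The greedy heuristic, data layout, and names (`SlotOption`, `oracle_select`) are illustrative assumptions, not the paper's actual solver: each impression offers several slots, at most one slot per impression may be chosen (the multiple-choice constraint), and options are ranked by acquisitions-per-cost while respecting the budget and CPA constraints.

```python
# Toy hindsight-oracle sketch for Oracle Imitation Learning (assumed, simplified).
from dataclasses import dataclass

@dataclass
class SlotOption:
    impression_id: int   # which impression opportunity this option belongs to
    slot: int            # which ad slot would be won
    acquisitions: float  # expected clicks/conversions if this slot is won
    cost: float          # second-price cost of winning this slot

def oracle_select(options, budget, cpa_target):
    """Greedy MCKP heuristic: pick at most one slot per impression,
    maximizing acquisitions subject to budget and CPA constraints."""
    chosen, total_cost, total_acq = {}, 0.0, 0.0
    # Rank all (impression, slot) options by efficiency (acquisitions per cost).
    for opt in sorted(options, key=lambda o: o.acquisitions / o.cost, reverse=True):
        if opt.impression_id in chosen:
            continue  # multiple-choice constraint: one slot per impression
        new_cost = total_cost + opt.cost
        new_acq = total_acq + opt.acquisitions
        # Keep the option only if both constraints still hold.
        if new_cost <= budget and new_cost / new_acq <= cpa_target:
            chosen[opt.impression_id] = opt
            total_cost, total_acq = new_cost, new_acq
    return chosen, total_cost, total_acq
```

The costs of the chosen options then serve as supervision targets ("oracle bids") for a student network that sees only real-time features, trained by ordinary supervised regression rather than reinforcement learning.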