Off-Policy Learning with Limited Supply

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge in contextual bandits where limited item availability—such as inventory or coupons—causes conventional off-policy learning to fail, as greedy selection of high-reward items depletes resources prematurely and undermines cumulative returns. The paper provides the first theoretical characterization of this failure mechanism and introduces a novel algorithm, OPLS, which allocates scarce resources based on relative expected rewards, prioritizing users for whom an item yields higher advantage compared to others. By doing so, OPLS achieves superior global resource allocation without relying on the unconstrained assumptions inherent in existing methods. Empirical evaluations on both synthetic and real-world datasets demonstrate that OPLS significantly outperforms current off-policy learning approaches in terms of cumulative reward.

📝 Abstract
We study off-policy learning (OPL) in contextual bandits, which plays a key role in a wide range of real-world applications such as recommendation systems and online advertising. Typical OPL in contextual bandits assumes an unconstrained environment where a policy can select the same item infinitely. However, in many practical applications, including coupon allocation and e-commerce, limited supply constrains items through budget limits on distributed coupons or inventory restrictions on products. In these settings, greedily selecting the item with the highest expected reward for the current user may lead to early depletion of that item, making it unavailable for future users who could potentially generate higher expected rewards. As a result, OPL methods that are optimal in unconstrained settings may become suboptimal in limited supply settings. To address this issue, we provide a theoretical analysis showing that conventional greedy OPL approaches may fail to maximize policy performance, and demonstrate that policies with superior performance must exist in limited supply settings. Based on this insight, we introduce a novel method called Off-Policy learning with Limited Supply (OPLS). Rather than simply selecting the item with the highest expected reward for the current user, OPLS focuses on items whose expected reward for that user is high relative to other users, enabling more efficient allocation of items with limited supply. Our empirical results on both synthetic and real-world datasets show that OPLS outperforms existing OPL methods in contextual bandit problems with limited supply.
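The failure mode the abstract describes can be shown with a toy example. The sketch below is not the paper's OPLS algorithm; it is a minimal illustration, with made-up rewards and supplies, of why per-user greedy selection depletes a scarce item early and how allocating it by relative advantage (how much the item beats a user's best alternative) recovers a better global allocation.

```python
# Toy illustration (not the actual OPLS method): greedy item selection
# vs. advantage-based allocation under a supply constraint.
# Rewards, item names, and supplies here are hypothetical.

def greedy_allocate(rewards, supply):
    """Each user, in arrival order, takes the in-stock item with the
    highest expected reward for them."""
    stock = dict(supply)
    total = 0.0
    for user_rewards in rewards:
        item = max((i for i in user_rewards if stock[i] > 0),
                   key=lambda i: user_rewards[i])
        stock[item] -= 1
        total += user_rewards[item]
    return total

def advantage_allocate(rewards, supply):
    """Rank (user, item) pairs by the item's advantage over that user's
    best alternative, so the scarce item goes to the user who gains
    the most from it."""
    stock = dict(supply)
    assignment = [None] * len(rewards)
    pairs = []
    for u, user_rewards in enumerate(rewards):
        for item, r in user_rewards.items():
            best_other = max(v for i, v in user_rewards.items() if i != item)
            pairs.append((r - best_other, u, item))
    for adv, u, item in sorted(pairs, reverse=True):
        if assignment[u] is None and stock[item] > 0:
            assignment[u] = item
            stock[item] -= 1
    return sum(rewards[u][item] for u, item in enumerate(assignment))

# Item "A" has supply 1. User 1 likes A slightly more than B, but
# user 2 gains far more from A; greedy gives A to user 1 anyway.
rewards = [{"A": 0.9, "B": 0.8}, {"A": 0.95, "B": 0.2}]
supply = {"A": 1, "B": 2}
print(greedy_allocate(rewards, supply))     # 0.9 + 0.2 = 1.1
print(advantage_allocate(rewards, supply))  # 0.8 + 0.95 = 1.75
```

Here greedy spends the scarce item on the first arriving user for a marginal gain of 0.1, while ranking by relative advantage reserves it for the user whose alternative is poor, matching the intuition behind OPLS as described in the abstract.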
Problem

Research questions and friction points this paper is trying to address.

off-policy learning
limited supply
contextual bandits
budget constraints
inventory restrictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

off-policy learning
limited supply
contextual bandits
resource allocation
OPLS