🤖 AI Summary
This paper studies regret minimization for the online Pandora's Box and Prophet Inequality problems under semi-bandit feedback. In the Pandora's Box setting, a learner sequentially opens boxes with unknown reward distributions in each round, incurring opening costs and observing realized rewards; the objective is to maximize net utility, i.e., the maximum observed reward minus the total opening cost. The authors further consider a contextual linear model in which each box's expected reward is a linear function of a known, time-varying $d$-dimensional context and the noise distribution is unknown but fixed across rounds. They establish the first optimal $\tilde{O}(\sqrt{nT})$ regret bound for the non-contextual case, matching the information-theoretic lower bound up to logarithmic factors, and propose the first contextual algorithm that jointly learns the linear reward parameters and the unknown noise distribution, achieving $\tilde{O}(nd\sqrt{T})$ regret. The same technical framework extends to the Prophet Inequality problem, yielding regret improvements of the same asymptotic order.
📝 Abstract
We study the Pandora's Box problem in an online learning setting with semi-bandit feedback. In each round, the learner sequentially pays to open up to $n$ boxes with unknown reward distributions, observes rewards upon opening, and decides when to stop. The learner's utility is the maximum observed reward minus the cumulative cost of the opened boxes, and the goal is to minimize regret, defined as the gap between the cumulative expected utility of the learner and that of the optimal policy. We propose a new algorithm that achieves $\widetilde{O}(\sqrt{nT})$ regret after $T$ rounds, which improves the $\widetilde{O}(n\sqrt{T})$ bound of Agarwal et al. [2024] and matches the known lower bound up to logarithmic factors. To better capture real-life applications, we then extend our results to a natural but challenging contextual linear setting, where each box's expected reward is linear in a known but time-varying $d$-dimensional context and the noise distribution is fixed over time. We design an algorithm that learns both the linear function and the noise distributions, achieving $\widetilde{O}(nd\sqrt{T})$ regret. Finally, we show that our techniques also apply to the online Prophet Inequality problem, where the learner must decide immediately whether or not to accept a revealed reward. In both the non-contextual and contextual settings, our approach achieves similar improvements and regret bounds.
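To make the utility notion concrete, the following is a minimal sketch of a single round of the Pandora's Box interaction described above. It is an illustration only, not the paper's algorithm: the `stop_rule` threshold policy and the uniform reward samplers are hypothetical stand-ins for the learner's policy and the unknown distributions.

```python
import random

def pandora_round(costs, reward_samplers, stop_rule):
    """Simulate one round of the online Pandora's Box problem.

    costs[i]           -- known cost to open box i
    reward_samplers[i] -- draws a reward for box i (its distribution is
                          unknown to the learner; here a stand-in sampler)
    stop_rule(best, i) -- hypothetical stopping policy: given the best
                          reward seen so far and the index of the next
                          box, return True to stop before opening it
    Returns the round's utility: max observed reward minus total cost paid.
    """
    best, total_cost = 0.0, 0.0
    for i, (cost, sample) in enumerate(zip(costs, reward_samplers)):
        if stop_rule(best, i):
            break
        total_cost += cost          # pay to open box i ...
        best = max(best, sample())  # ... then observe its realized reward
    return best - total_cost

# Illustration: three boxes with cost 0.1 and Uniform[0, 1] rewards,
# under a simple threshold rule (stop once the best reward exceeds 0.8).
random.seed(0)
utility = pandora_round(
    costs=[0.1, 0.1, 0.1],
    reward_samplers=[random.random] * 3,
    stop_rule=lambda best, i: best > 0.8,
)
```

Regret over $T$ rounds then compares the cumulative expected utility of such a policy against that of the optimal policy, which knows the reward distributions.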