🤖 AI Summary
This study addresses an online linear programming (OLP) problem in which resources arrive incrementally via an exogenous stochastic replenishment process, initial inventory is negligible, and the total budget is unknown a priori. It provides the first systematic characterization of the optimal regret bounds under such replenishment dynamics. The authors propose adaptive algorithms tailored to the underlying distributional properties: Õ(√T) regret for bounded distributions; O(log T) regret for non-degenerate finite-support distributions, together with an Ω(√T) lower bound for degenerate instances that reveals a fundamental gap from the classical setting; and a novel two-phase “accumulate-then-convert” strategy that attains O(log²T) regret for continuous-support distributions. Empirical evaluations demonstrate that the proposed methods significantly outperform natural adaptations of classical OLP algorithms.
📝 Abstract
We study an online linear programming (OLP) model in which inventory is not provided upfront but instead arrives gradually through an exogenous stochastic replenishment process. This replenishment-based formulation captures operational settings such as e-commerce fulfillment, perishable supply chains, and renewable-powered systems, where resources accumulate over time and initial inventories are small or zero. The introduction of dispersed, uncertain replenishment fundamentally alters the structure of classical OLPs, creating persistent stockout risk and eliminating advance knowledge of the total budget. We develop new algorithms and regret analyses for three major distributional regimes studied in the OLP literature: bounded distributions, finite-support distributions, and continuous-support distributions with a non-degeneracy condition. For bounded distributions, we design an algorithm that achieves $\widetilde{\mathcal{O}}(\sqrt{T})$ regret. For finite-support distributions with a non-degenerate induced LP, we obtain $\mathcal{O}(\log T)$ regret, and we establish an $\Omega(\sqrt{T})$ lower bound for degenerate instances, demonstrating a sharp separation from the classical setting where $\mathcal{O}(1)$ regret is achievable. For continuous-support, non-degenerate distributions, we develop a two-stage accumulate-then-convert algorithm that achieves $\mathcal{O}(\log^2 T)$ regret, comparable to the $\mathcal{O}(\log T)$ regret in classical OLPs. Together, these results provide a near-complete characterization of the optimal regret achievable in OLP with replenishment. Finally, we empirically evaluate our algorithms and demonstrate their advantages over natural adaptations of classical OLP methods in the replenishment setting.
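To make the two-phase idea concrete, here is a minimal toy simulation, not the paper's actual algorithm: it assumes unit-size orders with Uniform(0,1) rewards, Bernoulli replenishment at a known rate `rho`, a phase-1 length of √T, and a fixed price threshold matched to the replenishment rate. All of these choices (function name, distributions, threshold, phase length) are illustrative assumptions.

```python
import random

def accumulate_then_convert(T, rho=0.5, seed=0):
    """Toy two-phase policy (illustrative, not the paper's algorithm):
    phase 1 rejects every order while replenished inventory accumulates;
    phase 2 accepts any order whose reward clears a threshold chosen so the
    long-run acceptance rate matches the replenishment rate rho,
    subject to inventory being available."""
    rng = random.Random(seed)
    t0 = int(T ** 0.5)           # phase-1 length (illustrative choice)
    threshold = 1.0 - rho        # accept the top rho-fraction of Uniform(0,1) rewards
    inventory, revenue = 0, 0.0
    for t in range(T):
        inventory += rng.random() < rho   # one unit replenished w.p. rho
        reward = rng.random()             # this period's order reward
        if t >= t0 and inventory >= 1 and reward >= threshold:
            inventory -= 1                # serve the order from stock
            revenue += reward
    return revenue, inventory
```

The sketch shows why the accumulation phase matters: the buffer built in the first √T periods absorbs fluctuations in the replenishment process, so phase 2 can price orders as if the budget rate were known while keeping stockout risk small.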