🤖 AI Summary
In real-world inventory management, dynamic uncertainty in demand and lead-time distributions severely limits the generalization capability of existing deep reinforcement learning (DRL) methods. To address this, the paper proposes a unifying Super-Markov Decision Process formulation together with the three-phase Train, then Estimate and Decide (TED) framework, and introduces GC-LSN (Generally Capable Lost Sales Network), a generally capable agent for periodic-review lost-sales inventory settings with cyclic demand and stochastic lead times, designed for zero-shot generalization to unseen problem instances. The agent is trained on varied problem instances; during deployment, unknown demand and lead-time distributions are continuously estimated (using the nonparametric Kaplan-Meier estimator for lead times), and decisions are conditioned on these estimates. Experiments show that GC-LSN consistently outperforms classical heuristics when problem parameters are known and, when demand and/or lead-time distributions must be estimated online, delivers superior empirical performance to online learning methods that carry worst-case guarantees, enabling robust real-time decision-making under distributional uncertainty without retraining.
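The deployment phases described above can be wired together as a simple loop: re-estimate parameters from the observation history each period, then let the trained agent act conditioned on the current estimates. The sketch below is illustrative only; `ted_deploy`, the stub environment, and the callback interfaces are hypothetical placeholders, not the paper's actual API.

```python
def ted_deploy(policy, env, estimator, horizon):
    """Run the Estimate and Decide phases of the TED framework.

    Each period: refresh parameter estimates from observed data (Estimate),
    then query the trained generally capable agent, conditioned on those
    estimates, for an action (Decide).
    """
    obs = env.reset()
    history = []
    for _ in range(horizon):
        params_hat = estimator(history)   # Estimate: update from data seen so far
        action = policy(obs, params_hat)  # Decide: GCA conditioned on estimates
        obs, reward, info = env.step(action)
        history.append(info)              # record demand/lead-time observations
    return history


class _StubEnv:
    """Toy environment used only to illustrate the loop's wiring."""

    def reset(self):
        return 0

    def step(self, action):
        # return (next observation, reward, info recording the action taken)
        return 0, 0.0, {"order": action}


# Demo: a trivial "policy" that just echoes the estimate, and an
# "estimator" that counts observed periods.
log = ted_deploy(policy=lambda obs, est: est,
                 env=_StubEnv(),
                 estimator=lambda hist: len(hist),
                 horizon=3)
```

The point of the separation is that the same trained policy serves many instances: only the lightweight estimator changes state online, so no retraining is needed when distributions shift.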
📝 Abstract
Deploying deep reinforcement learning (DRL) in real-world inventory management presents challenges, including dynamic environments and uncertain problem parameters, e.g., demand and lead time distributions. These challenges highlight a research gap, suggesting a need for a unifying framework to model and solve sequential decision-making under parameter uncertainty. We address this by exploring an underexplored area of DRL for inventory management: training generally capable agents (GCAs) under zero-shot generalization (ZSG). Here, GCAs are advanced DRL policies designed to handle a broad range of sampled problem instances with diverse inventory challenges, and ZSG refers to the ability to successfully apply learned policies to unseen instances with unknown parameters without retraining. We propose a unifying Super-Markov Decision Process formulation and the Train, then Estimate and Decide (TED) framework to train and deploy a GCA tailored to inventory management applications. The TED framework consists of three phases: training a GCA on varied problem instances, continuously estimating problem parameters during deployment, and making decisions based on these estimates. Applied to periodic review inventory problems with lost sales, cyclic demand patterns, and stochastic lead times, our trained agent, the Generally Capable Lost Sales Network (GC-LSN), consistently outperforms well-known traditional policies when problem parameters are known. Moreover, under conditions where demand and/or lead time distributions are initially unknown and must be estimated, we benchmark against online learning methods that provide worst-case performance guarantees. Our GC-LSN policy, paired with the Kaplan-Meier estimator, is demonstrated to complement these methods by providing superior empirical performance.
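The abstract pairs GC-LSN with the Kaplan-Meier estimator for lead times: orders that have arrived give exact lead-time observations, while orders still outstanding are right-censored (we only know their lead time exceeds their current age). A minimal sketch of the standard Kaplan-Meier product-limit estimate under that framing; the function name and interface are illustrative, not the paper's implementation:

```python
def kaplan_meier(durations, arrived):
    """Kaplan-Meier estimate of the lead-time survival function S(t) = P(L > t).

    durations: elapsed time for each order (arrival delay if arrived,
               current age if still outstanding).
    arrived:   True if the order has arrived (event observed),
               False if still outstanding (right-censored).
    Returns a dict mapping each event time t to S(t).
    """
    n = len(durations)
    idx = sorted(range(n), key=lambda i: durations[i])
    times = [durations[i] for i in idx]
    events = [arrived[i] for i in idx]

    at_risk = n      # orders whose (possibly censored) duration is >= current t
    surv = 1.0
    curve = {}
    i = 0
    while i < n:
        t = times[i]
        deaths = 0   # arrivals exactly at time t
        removed = 0  # arrivals + censorings leaving the risk set at t
        while i < n and times[i] == t:
            deaths += events[i]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk  # product-limit update
            curve[t] = surv
        at_risk -= removed
    return curve


# Demo: four arrived orders with lead times 1, 2, 2, 4, plus one
# outstanding order censored at age 3.
curve = kaplan_meier([1, 2, 2, 4, 3], [True, True, True, True, False])
```

Unlike the empirical distribution of completed lead times alone, the censored observations keep the estimate from being biased toward short lead times, which is what makes this estimator a natural companion to the Estimate phase of TED.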