Zero-shot Generalization in Inventory Management: Train, then Estimate and Decide

📅 2024-11-01
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
In real-world inventory management, dynamic uncertainty in demand and lead-time distributions severely limits the generalization of existing deep reinforcement learning (DRL) methods. To address this, the authors propose the three-phase "Train, then Estimate and Decide" (TED) framework and introduce GC-LSN, a zero-shot generalizable inventory agent for lost-sales settings with cyclic demand and stochastic lead times. The approach combines a unifying Super-MDP formulation with the TED framework: a generally capable agent is trained on varied problem instances, demand and lead-time parameters are estimated continuously during deployment (e.g., via the nonparametric Kaplan-Meier estimator), and decisions are made from those estimates. Experiments show that GC-LSN consistently outperforms classical heuristics when problem parameters are known, and that under unknown demand and lead-time distributions it complements online learning methods with worst-case guarantees by delivering superior empirical performance, enabling robust real-time decision-making under unseen distributional shifts.

📝 Abstract
Deploying deep reinforcement learning (DRL) in real-world inventory management presents challenges, including dynamic environments and uncertain problem parameters, e.g., demand and lead time distributions. These challenges highlight a research gap, suggesting a need for a unifying framework to model and solve sequential decision-making under parameter uncertainty. We address this by exploring an underexplored area of DRL for inventory management: training generally capable agents (GCAs) under zero-shot generalization (ZSG). Here, GCAs are advanced DRL policies designed to handle a broad range of sampled problem instances with diverse inventory challenges. ZSG refers to the ability to successfully apply learned policies to unseen instances with unknown parameters without retraining. We propose a unifying Super-Markov Decision Process formulation and the Train, then Estimate and Decide (TED) framework to train and deploy a GCA tailored to inventory management applications. The TED framework consists of three phases: training a GCA on varied problem instances, continuously estimating problem parameters during deployment, and making decisions based on these estimates. Applied to periodic review inventory problems with lost sales, cyclic demand patterns, and stochastic lead times, our trained agent, the Generally Capable Lost Sales Network (GC-LSN), consistently outperforms well-known traditional policies when problem parameters are known. Moreover, under conditions where demand and/or lead time distributions are initially unknown and must be estimated, we benchmark against online learning methods that provide worst-case performance guarantees. Our GC-LSN policy, paired with the Kaplan-Meier estimator, is demonstrated to complement these methods by providing superior empirical performance.
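The abstract pairs GC-LSN with the Kaplan-Meier estimator for lead times that are only partially observed (outstanding orders that have not yet arrived are right-censored). A minimal sketch of the standard Kaplan-Meier product-limit estimator, not the paper's own code:

```python
import numpy as np

def kaplan_meier_survival(durations, observed):
    """Kaplan-Meier product-limit estimate of the survival function S(t).

    durations: elapsed periods per order (time to arrival, or time
               outstanding so far if the order is still in transit).
    observed:  1 if the order has arrived (event), 0 if it is still
               outstanding (right-censored).
    Returns the distinct arrival times and S(t) just after each.
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=int)
    event_times = np.unique(durations[observed == 1])
    surv, survival = 1.0, []
    for t in event_times:
        at_risk = np.sum(durations >= t)                   # orders unresolved at t
        events = np.sum((durations == t) & (observed == 1))
        surv *= 1.0 - events / at_risk                     # product-limit update
        survival.append(surv)
    return event_times, np.array(survival)
```

For example, with arrivals after 1, 2, and 3 periods plus one order still outstanding after 2 periods, the censored order contributes to the risk set without counting as an arrival, so the estimated survival probabilities are 0.75, 0.5, and 0.0.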
Problem

Research questions and friction points this paper is trying to address.

Addressing dynamic environments and uncertain parameters in inventory management
Developing zero-shot generalization for unseen inventory problem instances
Creating a unifying framework for sequential decision-making under uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Train Generally Capable Agents using DRL
Propose Super-Markov Decision Process formulation
Combine parameter estimation with decision framework
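The combination of parameter estimation with the decision framework can be sketched as a deployment loop: each period, refresh the parameter estimates from the observations gathered so far, then let the pre-trained agent act on the state augmented with those estimates. All function and parameter names below are illustrative, not the paper's API:

```python
def ted_deployment_loop(policy, estimate_params, env_step, state, horizon=100):
    """Hedged sketch of the Estimate and Decide phases of TED at deployment.

    policy:          pre-trained generally capable agent, conditioned on
                     the current state and parameter estimates.
    estimate_params: maps past observations to parameter estimates
                     (e.g. Kaplan-Meier on lead times).
    env_step:        advances the environment; returns next state,
                     period cost, and the new observation.
    """
    observations, total_cost = [], 0.0
    for _ in range(horizon):
        params_hat = estimate_params(observations)  # Estimate phase
        action = policy(state, params_hat)          # Decide phase
        state, cost, obs = env_step(state, action)  # observe demand/arrivals
        observations.append(obs)
        total_cost += cost
    return total_cost
```

The design point is the separation of concerns: the agent is trained once (Train phase) across sampled instances, while estimation and decision run online without retraining, which is what permits zero-shot deployment on unseen parameter settings.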
Tarkan Temizöz
Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, PO Box 513, Eindhoven 5600 MB, Netherlands

Christina Imdahl
Eindhoven University of Technology

R. Dijkman
Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, PO Box 513, Eindhoven 5600 MB, Netherlands

Douniel Lamghari-Idrissi
Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, PO Box 513, Eindhoven 5600 MB, Netherlands; ASML US LLC, 2625 W Geronimo Pl, Chandler, Arizona 85224, USA

Willem van Jaarsveld
Associate Professor of Operations Research, Eindhoven University of Technology
Stochastic operations management · Supply Chain Management · Deep Reinforcement Learning