Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control

📅 2024-06-26
🏛️ arXiv.org
🤖 AI Summary
In stockout-driven inventory control, reinforcement learning (RL) suffers from low sample efficiency and unobserved true demand due to censored sales data. To address these bottlenecks, this paper proposes a novel RL framework grounded in feedback graph modeling. We introduce, for the first time, a structured feedback graph specifically designed for stockout settings and theoretically prove that it reduces sample complexity. Furthermore, we incorporate an intrinsic motivation reward mechanism to actively guide the agent toward high-information-gain state-action pairs. Our approach integrates RL, causal feedback modeling, and theory-driven sample-efficiency analysis, achieving strong policy performance while drastically reducing required interaction data. Experiments demonstrate a 40–65% reduction in data requirements to attain equivalent performance compared to baseline methods. The implementation is publicly available.

📝 Abstract
Reinforcement learning (RL) has proven to perform well and generalize across inventory control (IC) problems. However, further improvement of RL algorithms in the IC domain is impeded by two limitations of online experience. First, online experience is expensive to acquire in real-world applications; given the low sample efficiency of RL algorithms, training a policy to convergence can take extensive time. Second, online experience may not reflect true demand because of the lost-sales phenomenon typical in IC, which makes the learning process more challenging. To address these challenges, we propose a decision framework that combines reinforcement learning with feedback graphs (RLFG) and intrinsically motivated exploration (IME) to boost sample efficiency. In particular, we first exploit the inherent properties of lost-sales IC problems and design a feedback graph (FG) tailored to them that generates abundant side experiences to aid RL updates. We then conduct a rigorous theoretical analysis of how the designed FG reduces the sample complexity of RL methods. Based on these theoretical insights, we design an intrinsic reward that directs the RL agent toward regions of the state-action space with more side experiences, further exploiting the FG's power. Experimental results demonstrate that our method greatly improves the sample efficiency of applying RL in IC. Our code is available at https://anonymous.4open.science/r/RLIMFG4IC-811D/
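To make the feedback-graph idea concrete, here is a minimal sketch of how side experiences can arise in lost-sales IC. It is not the paper's implementation: the order-up-to action set, the cost parameters, and the function names (`reward`, `side_experiences`) are all hypothetical. The key property it illustrates is that whenever sales come in strictly below the stocked level, demand is fully observed, so the reward of every other order-up-to level can be computed offline from that single interaction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost parameters, not taken from the paper.
HOLDING_COST = 1.0
LOST_SALE_PENALTY = 4.0
LEVELS = np.arange(0, 11)  # candidate order-up-to levels (actions)

def reward(level, demand):
    """Per-period reward: holding cost on leftover stock,
    penalty on lost sales (demand exceeding stock)."""
    sales = min(level, demand)
    return -(HOLDING_COST * (level - sales)
             + LOST_SALE_PENALTY * (demand - sales))

def side_experiences(level, observed_sales):
    """Feedback-graph expansion: if sales < stocked level, there was
    no stockout, so the demand is uncensored and the reward of *every*
    other order-up-to level can be evaluated as a side experience.
    If sales == level, demand is censored and only the taken action
    yields an experience."""
    if observed_sales < level:  # demand fully observed
        demand = observed_sales
        return [(a, reward(a, demand)) for a in LEVELS]
    return [(level, reward(level, observed_sales))]

# One simulated period: true demand is hidden behind censored sales.
true_demand = rng.poisson(4)
action = 8
sales = min(action, true_demand)
batch = side_experiences(action, sales)  # experiences to feed RL updates
```

Under this construction, a single uncensored period yields one experience per action, which is the mechanism behind the sample-complexity reduction the paper analyzes.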
Problem

Research questions and friction points this paper is trying to address.

Improve sample efficiency in inventory control
Address lost sales in demand data
Enhance RL with feedback graph exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning with feedback graph
Intrinsically motivated exploration
Improved sample efficiency in inventory control
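The intrinsic motivation component can be sketched as an exploration bonus that favors state-action pairs likely to produce many side experiences. This is an assumption about the mechanism, not the paper's exact reward: the bonus below simply scales with the estimated probability that demand will be fully observed (no stockout), and `beta` is a hypothetical trade-off coefficient.

```python
import numpy as np

def intrinsic_bonus(level, demand_samples, beta=0.5):
    """Hypothetical intrinsic reward: proportional to the estimated
    probability that demand falls strictly below the order-up-to
    level, i.e. that the period is uncensored and the feedback graph
    yields side experiences for all actions."""
    p_uncensored = np.mean(np.asarray(demand_samples) < level)
    return beta * p_uncensored

# Higher order-up-to levels are more likely to reveal true demand,
# so early in training they earn a larger exploration bonus.
samples = [2, 3, 5, 4, 6, 3]  # past (uncensored) demand observations
```

The agent would then optimize the extrinsic cost signal plus this bonus, steering exploration toward information-rich regions exactly as the theoretical analysis suggests.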
👥 Authors

Zifan Liu (Adobe): machine learning, deep learning, data management
Xinran Li (Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology)
Shibo Chen (Architect, Tenstorrent): architecture, network on chip
Gen Li (Department of Statistics, The Chinese University of Hong Kong)
Jiashuo Jiang (Hong Kong University of Science and Technology): operations research, operations management, optimization, approximation algorithms, machine learning
Jun Zhang (Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology)