Robust Dynamic Material Handling via Adaptive Constrained Evolutionary Reinforcement Learning

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the robust scheduling challenge in Dynamic Material Handling (DMH), characterized by dynamically arriving tasks, limited vehicle resources, hard deadline constraints, and sparse rewards, this paper proposes the Adaptive Constrained Evolutionary Reinforcement Learning (ACERL) framework. ACERL combines a co-evolving population of actors for diverse exploration, constraint-aware assessment that restricts the policy's behaviour, and adaptive sampling of historical training instances to mitigate reward sparsity and improve generalization. Evaluated on eight training and eight unseen test instances, ACERL fully satisfies the deadline constraints while significantly reducing makespan and total tardiness, and it remains robust on 40 unseen noised instances. Cross-validation and a rigorous ablation study confirm the synergistic contributions of its components. ACERL establishes a scalable paradigm for real-time, constraint-aware dynamic scheduling.

📝 Abstract
Dynamic material handling (DMH) involves the assignment of dynamically arriving material transporting tasks to suitable vehicles in real time for minimising makespan and tardiness. In real-world scenarios, historical task records are usually available, which enables the training of a decision policy on multiple instances consisting of historical records. Recently, reinforcement learning has been applied to solve DMH. Due to the occurrence of dynamic events such as new tasks, adaptability is highly required. Solving DMH is challenging since constraints including task delay should be satisfied. Feedback is received only when all tasks are served, which leads to sparse rewards. Besides, making the best use of limited computational resources and historical records for training a robust policy is crucial. The time allocated to different problem instances can highly impact the learning process. To tackle those challenges, this paper proposes a novel adaptive constrained evolutionary reinforcement learning (ACERL) approach, which maintains a population of actors for diverse exploration. ACERL assesses each actor on sparse rewards and constraint violation to restrict the behaviour of the policy. Moreover, ACERL adaptively selects the most beneficial training instances for improving the policy. Extensive experiments on eight training and eight unseen test instances demonstrate the outstanding performance of ACERL compared with several state-of-the-art algorithms. Policies trained by ACERL can schedule the vehicles while fully satisfying the constraints. Additional experiments on 40 unseen noised instances show the robust performance of ACERL. Cross-validation further presents the overall effectiveness of ACERL. Besides, a rigorous ablation study highlights the coordination and benefits of each ingredient of ACERL.
Problem

Research questions and friction points this paper is trying to address.

Minimizing makespan and tardiness in dynamic material handling tasks
Addressing sparse rewards and constraint violations in reinforcement learning
Optimizing computational resources and historical data for robust policy training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive constrained evolutionary reinforcement learning
Population of actors for diverse exploration
Adaptive selection of training instances
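The three ingredients above can be illustrated with a minimal sketch. Everything in the code below is a hypothetical reconstruction from the abstract, not the paper's actual implementation: the function names, the (makespan, tardiness, violation) evaluation interface, the constraint-first ranking rule, and the instance-weighting heuristic are all illustrative assumptions.

```python
import random

def acerl_sketch(population, instances, evaluate, mutate, generations=10):
    """Illustrative ACERL-style loop (hypothetical API).

    evaluate(actor, instance) is assumed to return a tuple
    (makespan, tardiness, constraint_violation); lower is better for all.
    """
    # Start with uniform sampling weights over the training instances.
    weights = [1.0] * len(instances)

    for _ in range(generations):
        # Adaptive instance selection: sample one training instance,
        # biased towards instances the population still handles poorly.
        i = random.choices(range(len(instances)), weights=weights)[0]

        # Assess every actor in the population on the chosen instance.
        scores = [evaluate(actor, instances[i]) for actor in population]

        # Constraint-first ranking: feasible actors (zero violation) always
        # outrank infeasible ones; ties break on makespan + tardiness.
        ranked = sorted(
            zip(population, scores),
            key=lambda pair: (pair[1][2] > 0, pair[1][2], pair[1][0] + pair[1][1]),
        )

        # Keep the better half and refill by mutating random survivors.
        survivors = [actor for actor, _ in ranked[: len(population) // 2]]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(len(population) - len(survivors))
        ]

        # Re-weight the instance: a high remaining objective means the
        # instance is still hard, so it receives more training time later.
        best = ranked[0][1]
        weights[i] = 1.0 + best[0] + best[1]

    return population
```

The sketch keeps only the structure suggested by the abstract: a population explored in parallel, actor assessment that prioritizes constraint satisfaction over the makespan/tardiness objective, and sampling weights that steer computational budget towards the most beneficial instances.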