🤖 AI Summary
This paper addresses robust scheduling for Dynamic Material Handling (DMH), where transport tasks arrive dynamically, vehicle resources are limited, delay constraints must be satisfied, and rewards are sparse. It proposes the Adaptive Constrained Evolutionary Reinforcement Learning (ACERL) framework, which combines a co-evolving population of actor policies for diverse exploration, constraint-aware assessment that restricts policy behaviour, and adaptive sampling of historical training instances to mitigate reward sparsity and improve generalisation. Evaluated on eight benchmark instances, ACERL fully satisfies the delay constraints while significantly reducing makespan and total tardiness, and it remains robust across 40 noised test scenarios. Cross-validation and a rigorous ablation study confirm the synergistic contributions of its components, establishing ACERL as a scalable approach to real-time, constraint-aware dynamic scheduling.
📝 Abstract
Dynamic material handling (DMH) involves assigning dynamically arriving material transporting tasks to suitable vehicles in real time to minimise makespan and tardiness. In real-world scenarios, historical task records are usually available, which enables training a decision policy on multiple instances built from those records. Recently, reinforcement learning has been applied to solve DMH. Due to the occurrence of dynamic events such as new tasks, high adaptability is required. Solving DMH is challenging since constraints, including task delay, must be satisfied. Feedback is received only when all tasks are served, which leads to sparse rewards. Besides, making the best use of limited computational resources and historical records to train a robust policy is crucial: the time allocated to different problem instances strongly impacts the learning process. To tackle those challenges, this paper proposes a novel adaptive constrained evolutionary reinforcement learning (ACERL) approach, which maintains a population of actors for diverse exploration. ACERL assesses each actor to tackle sparse rewards and uses constraint violation to restrict the behaviour of the policy. Moreover, ACERL adaptively selects the most beneficial training instances for improving the policy. Extensive experiments on eight training and eight unseen test instances demonstrate the outstanding performance of ACERL compared with several state-of-the-art algorithms. Policies trained by ACERL can schedule the vehicles while fully satisfying the constraints. Additional experiments on 40 unseen noised instances show the robust performance of ACERL. Cross-validation further demonstrates the overall effectiveness of ACERL. Besides, a rigorous ablation study highlights the coordination and benefits of each ingredient of ACERL.
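The abstract describes three interacting mechanisms: a population of actors evolved for diverse exploration, constraint-first assessment of each actor (violation of delay constraints dominates raw objective quality), and adaptive selection of the most beneficial training instance. The toy sketch below illustrates how such a loop could be wired together; it is not the paper's algorithm. The stand-in "instances", the distance-based makespan/tardiness surrogates, the lexicographic constraint-first ranking, and the credit-based instance picker are all illustrative assumptions.

```python
import random

random.seed(0)

# Toy stand-in for DMH instances: each "instance" is a target vector. An
# actor's makespan surrogate is its squared distance to the target, and its
# tardiness surrogate (the constraint) is how far any weight exceeds a bound.
INSTANCES = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
DEADLINE = 0.5

def evaluate(actor, instance):
    makespan = sum((a - t) ** 2 for a, t in zip(actor, instance))
    tardiness = sum(max(0.0, abs(a) - DEADLINE) for a in actor)
    return makespan, tardiness

def rank_key(stats):
    # Constraint-first, lexicographic ranking: feasibility dominates quality.
    makespan, violation = stats
    return (violation, makespan)

def mutate(actor, sigma=0.1):
    return [w + random.gauss(0.0, sigma) for w in actor]

def acerl_sketch(pop_size=8, generations=30):
    population = [[random.uniform(-1, 1) for _ in range(4)]
                  for _ in range(pop_size)]
    # Credit per training instance: instances whose use recently improved the
    # population are picked more often ("most beneficial" selection).
    credit = [1.0] * len(INSTANCES)
    for _ in range(generations):
        idx = max(range(len(INSTANCES)),
                  key=lambda i: credit[i] + random.random() * 0.1)
        instance = INSTANCES[idx]
        scored = sorted(((a, evaluate(a, instance)) for a in population),
                        key=lambda s: rank_key(s[1]))
        elites = [a for a, _ in scored[: pop_size // 2]]
        before = rank_key(scored[0][1])
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - len(elites))]
        after = min(rank_key(evaluate(a, instance)) for a in population)
        credit[idx] = 0.9 * credit[idx] + (1.0 if after < before else 0.0)
    return min(population, key=lambda a: rank_key(evaluate(a, INSTANCES[0])))

best = acerl_sketch()
makespan, violation = evaluate(best, INSTANCES[0])
print(makespan, violation)
```

The constraint-first tuple in `rank_key` is one simple way to make constraint violation dominate the sparse objective signal, and the decayed `credit` vector is a minimal bandit-style proxy for adaptively allocating training time across instances.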