AI Summary
This work addresses the clustered pickup and delivery problem (PDP), where tightly coupled nodes, precedence constraints, and high inference latency in conventional deep reinforcement learning (DRL) methods pose significant challenges. To tackle these issues, we propose the Cluster-Aware Attention-based Deep Reinforcement Learning (CAADRL) framework, which uniquely incorporates clustering structure as an inductive bias. CAADRL features a Transformer-based cluster-aware encoder and a dynamic dual-decoder architecture that jointly leverages global and intra-cluster attention mechanisms. It further integrates a learnable gating mechanism and employs a POMO-style multi-trajectory policy gradient training strategy. Experimental results demonstrate that CAADRL achieves or surpasses state-of-the-art performance on both synthetic clustered and uniformly distributed PDP benchmarks, delivering substantially improved solution quality and significantly reduced inference time, especially on large-scale clustered instances.
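To make the dual-decoder gating concrete, here is a minimal numpy sketch of one decoding step. It assumes the learnable gate produces a scalar that blends the intra-cluster and inter-cluster decoders' probability distributions via a sigmoid; the function names, the blending-of-probabilities choice (as opposed to blending logits), and the per-step scalar gate are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gated_decoder_step(intra_logits, inter_logits, gate_logit):
    """One hypothetical dual-decoder step: mix the intra-cluster and
    inter-cluster next-node distributions with a learned sigmoid gate g,
        p = g * softmax(intra_logits) + (1 - g) * softmax(inter_logits),
    so g near 1 favors routing inside the current cluster and g near 0
    favors transitioning to another cluster."""
    g = 1.0 / (1.0 + np.exp(-gate_logit))  # sigmoid of the gate's scalar output
    return g * softmax(intra_logits) + (1.0 - g) * softmax(inter_logits)
```

Because both inputs are proper distributions after the softmax, the convex combination is itself a valid distribution over candidate next nodes, so the gate can shift mass between local and global moves without any renormalization.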
Abstract
The Pickup and Delivery Problem (PDP) is a fundamental and challenging variant of the Vehicle Routing Problem, characterized by tightly coupled pickup--delivery pairs, precedence constraints, and spatial layouts that often exhibit clustering. Existing deep reinforcement learning (DRL) approaches either model all nodes on a flat graph, relying on implicit learning to enforce constraints, or achieve strong performance through inference-time collaborative search at the cost of substantial latency. In this paper, we propose \emph{CAADRL} (Cluster-Aware Attention-based Deep Reinforcement Learning), a DRL framework that explicitly exploits the multi-scale structure of PDP instances via cluster-aware encoding and hierarchical decoding. The encoder builds on a Transformer and combines global self-attention with intra-cluster attention over depot, pickup, and delivery nodes, producing embeddings that are both globally informative and locally role-aware. Based on these embeddings, we introduce a Dynamic Dual-Decoder with a learnable gate that balances intra-cluster routing and inter-cluster transitions at each step. The policy is trained end-to-end with a POMO-style policy gradient scheme using multiple symmetric rollouts per instance. Experiments on synthetic clustered and uniform PDP benchmarks show that CAADRL matches or improves upon strong state-of-the-art baselines on clustered instances and remains highly competitive on uniform instances, particularly as problem size increases. Crucially, our method achieves these results with substantially lower inference time than neural collaborative-search baselines, suggesting that explicitly modeling cluster structure provides an effective and efficient inductive bias for neural PDP solvers.
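The POMO-style training scheme mentioned above uses the mean cost of the multiple symmetric rollouts of each instance as a shared baseline for the policy gradient. A minimal numpy sketch of that advantage computation, under the standard POMO formulation (the exact rollout construction in CAADRL is not specified here):

```python
import numpy as np

def pomo_advantages(tour_costs):
    """POMO-style shared baseline for a minimization objective.

    tour_costs: array of shape (batch, n_rollouts), the tour cost of each
    symmetric rollout of each instance. Each rollout's advantage is the
    instance's mean rollout cost minus its own cost, so rollouts cheaper
    than the instance average receive a positive advantage and are
    reinforced, without training a separate baseline network."""
    baseline = tour_costs.mean(axis=1, keepdims=True)  # per-instance mean cost
    return baseline - tour_costs
```

The REINFORCE loss would then weight each rollout's log-probability by its (detached) advantage; because the baseline is the within-instance mean, the advantages of each instance's rollouts sum to zero, which keeps the gradient estimate low-variance.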