Cluster-Aware Attention-Based Deep Reinforcement Learning for Pickup and Delivery Problems

📅 2026-03-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the clustered pickup and delivery problem (PDP), where tightly coupled nodes, precedence constraints, and high inference latency in conventional deep reinforcement learning (DRL) methods pose significant challenges. To tackle these issues, we propose the Cluster-Aware Attention-based Deep Reinforcement Learning (CAADRL) framework, which uniquely incorporates clustering structure as an inductive bias. CAADRL features a Transformer-based cluster-aware encoder and a dynamic dual-decoder architecture that jointly leverages global and intra-cluster attention mechanisms. It further integrates a learnable gating mechanism and employs a POMO-style multi-trajectory policy gradient training strategy. Experimental results demonstrate that CAADRL achieves or surpasses state-of-the-art performance on both synthetic clustered and uniformly distributed PDP benchmarks, delivering substantially improved solution quality and significantly reduced inference time, especially on large-scale clustered instances.
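The listing does not specify the form of the learnable gate that blends the two decoders. A minimal sketch of one plausible form, assuming a per-step scalar gate mixing intra-cluster and global attention logits before the softmax (all names and values here are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over candidate nodes."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def gated_decoder_step(global_logits, intra_logits, gate):
    """Blend inter-cluster (global) and intra-cluster scores.

    gate in (0, 1): higher values favor intra-cluster routing,
    lower values favor inter-cluster transitions.
    """
    mixed = gate * intra_logits + (1.0 - gate) * global_logits
    return softmax(mixed)

# Toy step over three candidate nodes; gate would be learned per step.
probs = gated_decoder_step(
    np.array([1.0, 0.5, -2.0]),   # global attention logits (illustrative)
    np.array([2.0, -1.0, 0.0]),   # intra-cluster attention logits (illustrative)
    gate=0.7,
)
```

With this gate value the intra-cluster scores dominate, so the node favored by the intra-cluster decoder receives the highest selection probability.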

๐Ÿ“ Abstract
The Pickup and Delivery Problem (PDP) is a fundamental and challenging variant of the Vehicle Routing Problem, characterized by tightly coupled pickup-delivery pairs, precedence constraints, and spatial layouts that often exhibit clustering. Existing deep reinforcement learning (DRL) approaches either model all nodes on a flat graph, relying on implicit learning to enforce constraints, or achieve strong performance through inference-time collaborative search at the cost of substantial latency. In this paper, we propose CAADRL (Cluster-Aware Attention-based Deep Reinforcement Learning), a DRL framework that explicitly exploits the multi-scale structure of PDP instances via cluster-aware encoding and hierarchical decoding. The encoder builds on a Transformer and combines global self-attention with intra-cluster attention over depot, pickup, and delivery nodes, producing embeddings that are both globally informative and locally role-aware. Based on these embeddings, we introduce a Dynamic Dual-Decoder with a learnable gate that balances intra-cluster routing and inter-cluster transitions at each step. The policy is trained end-to-end with a POMO-style policy gradient scheme using multiple symmetric rollouts per instance. Experiments on synthetic clustered and uniform PDP benchmarks show that CAADRL matches or improves upon strong state-of-the-art baselines on clustered instances and remains highly competitive on uniform instances, particularly as problem size increases. Crucially, our method achieves these results with substantially lower inference time than neural collaborative-search baselines, suggesting that explicitly modeling cluster structure provides an effective and efficient inductive bias for neural PDP solvers.
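The POMO-style scheme mentioned in the abstract trains with multiple symmetric rollouts per instance and a shared baseline. A minimal sketch of that advantage computation (the function name and array shape are assumptions for illustration; the paper's exact loss may differ):

```python
import numpy as np

def pomo_advantages(tour_costs):
    """REINFORCE advantages with a POMO-style shared baseline.

    tour_costs: shape (num_rollouts,), one tour cost per symmetric
    rollout of the same instance. The baseline is the mean cost over
    rollouts, so tours cheaper than average get a positive advantage
    (we minimize cost) and no separate critic network is needed.
    """
    baseline = tour_costs.mean()
    return baseline - tour_costs

# Four symmetric rollouts of one instance (illustrative costs).
adv = pomo_advantages(np.array([10.0, 8.0, 12.0, 10.0]))
```

Because the baseline is the per-instance rollout mean, the advantages sum to zero across rollouts, which keeps the gradient estimate low-variance without an auxiliary value model.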
Problem

Research questions and friction points this paper is trying to address.

Pickup and Delivery Problem
clustering
vehicle routing
deep reinforcement learning
inference latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

cluster-aware attention
hierarchical decoding
dynamic dual-decoder
multi-scale structure
inductive bias
Wentao Wang
Leicester International Institute, Dalian University of Technology, Panjin, 124221, Liaoning, China.
Lifeng Han
Leiden University Medical Centre
Clinical NLP, Information Extraction, Machine Translation, Multiword Expressions
Guangyu Zou
Department of Electronic and Information Technology, Dalian University of Technology, Panjin, 124221, Liaoning, China.