🤖 AI Summary
To address poor generalization and slow adaptation to distribution shift in deep-reinforcement-learning solvers for online 3D bin packing (3D-BPP), this paper proposes the "Adaptive Selection After Pruning" (ASAP) framework. ASAP decouples decision-making into two stages: dynamic action-space pruning followed by a lightweight selection policy, and uses MAML-style meta-learning to obtain a cross-distribution initialization. Because only the selection policy is finetuned at test time, adaptation requires just a few gradient steps and completes on the scale of seconds, substantially improving out-of-distribution adaptability. The two policies are co-trained, and the framework supports both discrete and continuous action spaces. Experiments demonstrate that ASAP consistently surpasses state-of-the-art methods both in-distribution and out-of-distribution, achieving higher packing utilization, better robustness, and up to 47% improvement in generalization performance.
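The prune-then-select decomposition described above can be illustrated with a minimal sketch. All names, the score dictionaries, and the greedy (argmax) decoding are illustrative assumptions; in the paper both policies are learned networks, not fixed score tables.

```python
def prune_actions(actions, prune_scores, keep_k):
    """Pruning policy: discard inherently bad placements, keeping only the
    keep_k highest-scoring candidates (the real pruning policy is learned)."""
    ranked = sorted(actions, key=lambda a: prune_scores[a], reverse=True)
    return ranked[:keep_k]

def select_action(candidates, select_scores):
    """Selection policy: choose among the surviving candidates.
    Greedy decoding stands in for the learned policy's sampling."""
    return max(candidates, key=lambda a: select_scores[a])

# Toy example: six candidate placements, each scored by both policies.
actions = list(range(6))
prune_scores = {0: 0.1, 1: 0.9, 2: 0.4, 3: 0.8, 4: 0.2, 5: 0.7}
select_scores = {0: 0.5, 1: 0.3, 2: 0.9, 3: 0.6, 4: 0.1, 5: 0.8}

survivors = prune_actions(actions, prune_scores, keep_k=3)  # → [1, 3, 5]
chosen = select_action(survivors, select_scores)            # → 5
```

Note that action 2 has the highest selection score overall, but it never reaches the selection policy: pruning first restricts the choice to the most valuable candidates, which is what makes the selection policy lightweight enough to finetune quickly.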
📝 Abstract
Recently, deep reinforcement learning (DRL) has achieved promising results in solving online 3D Bin Packing Problems (3D-BPP). However, these DRL-based policies may perform poorly on new instances due to distribution shift. Besides generalization, we also consider adaptation, which was completely overlooked by previous work and aims at rapidly finetuning these policies to a new test distribution. To tackle both generalization and adaptation issues, we propose Adaptive Selection After Pruning (ASAP), which decomposes a solver's decision-making into two policies, one for pruning and one for selection. The role of the pruning policy is to remove inherently bad actions, which allows the selection policy to choose among the remaining most valuable actions. To learn these policies, we propose a training scheme based on a meta-learning phase of both policies followed by a finetuning phase of the sole selection policy to rapidly adapt it to a test distribution. Our experiments demonstrate that ASAP exhibits excellent generalization and adaptation capabilities on in-distribution and out-of-distribution instances under both discrete and continuous setups.
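The training scheme above (meta-learn an initialization across distributions, then finetune only the selection policy with a few gradient steps) can be sketched as a first-order MAML loop on a toy one-parameter "policy". The quadratic task losses, function names, and hyperparameters are illustrative assumptions, not the paper's actual objective or architecture.

```python
def inner_adapt(theta, targets, lr, steps):
    """Finetuning phase: a few gradient steps on one task's data.
    Toy loss per task: mean of (theta - t)^2 over the task's targets."""
    for _ in range(steps):
        grad = sum(2 * (theta - t) for t in targets) / len(targets)
        theta = theta - lr * grad
    return theta

def maml_meta_train(tasks, theta0=0.0, inner_lr=0.1, meta_lr=0.05, epochs=100):
    """Meta-learning phase (first-order MAML): the meta-gradient is
    approximated by the task gradient evaluated at the adapted parameters,
    so the initialization theta is pulled toward points that adapt well."""
    theta = theta0
    for _ in range(epochs):
        meta_grad = 0.0
        for targets in tasks:
            adapted = inner_adapt(theta, targets, inner_lr, steps=3)
            meta_grad += sum(2 * (adapted - t) for t in targets) / len(targets)
        theta -= meta_lr * meta_grad / len(tasks)
    return theta
```

With two toy "distributions" centered at 1.0 and 3.0, `maml_meta_train([[1.0], [3.0]])` converges near 2.0, the initialization from which a handful of inner gradient steps reach either task; this mirrors why only a short finetuning phase is needed at test time.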