AI Summary
This work addresses catastrophic forgetting in neural solvers for vehicle routing problems under continual task drift, where limited training resources hinder sustained learning. To tackle this challenge, we propose DREE (Dual Replay with Experience Enhancement), the first lifelong learning framework tailored to such dynamic settings. DREE integrates a dual replay mechanism with experience enhancement to jointly improve learning efficiency on new tasks, retention of previously acquired knowledge, and generalization to unseen tasks, all under constrained training budgets. The framework is designed to be seamlessly compatible with a variety of existing neural solvers. Extensive experiments demonstrate that DREE significantly outperforms baseline approaches, effectively mitigating catastrophic forgetting while enhancing overall adaptability in evolving problem landscapes.
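The summary does not detail the dual replay mechanism itself. As a rough intuition only, a minimal sketch of what a generic dual replay buffer might look like is shown below, assuming a short-term FIFO buffer over recent tasks paired with a reservoir-sampled long-term buffer over the whole task stream; the class and parameter names here are hypothetical illustrations, not DREE's actual design.

```python
import random

class DualReplayBuffer:
    """Illustrative dual replay: a short-term buffer of recent experiences
    plus a reservoir-sampled long-term buffer over the full task history.
    Hypothetical sketch; not the mechanism proposed in the paper."""

    def __init__(self, recent_cap=128, longterm_cap=512, seed=0):
        self.recent = []              # FIFO buffer: experiences from recent tasks
        self.longterm = []            # reservoir: uniform sample of all history
        self.recent_cap = recent_cap
        self.longterm_cap = longterm_cap
        self.seen = 0                 # total experiences observed so far
        self.rng = random.Random(seed)

    def add(self, experience):
        # Short-term buffer: keep only the most recent experiences (FIFO).
        self.recent.append(experience)
        if len(self.recent) > self.recent_cap:
            self.recent.pop(0)
        # Long-term buffer: reservoir sampling maintains a uniform sample
        # of everything seen, preserving knowledge of long-past tasks.
        self.seen += 1
        if len(self.longterm) < self.longterm_cap:
            self.longterm.append(experience)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.longterm_cap:
                self.longterm[j] = experience

    def sample(self, batch_size, recent_frac=0.5):
        # Mix recent and long-term experiences: the recent share drives
        # learning on the current task, the long-term share fights forgetting.
        n_recent = min(int(batch_size * recent_frac), len(self.recent))
        n_long = min(batch_size - n_recent, len(self.longterm))
        return (self.rng.sample(self.recent, n_recent)
                + self.rng.sample(self.longterm, n_long))
```

Mixing the two buffers in each training batch is one common way such designs trade off plasticity (learning the drifted task) against stability (retaining earlier tasks) under a fixed per-task budget.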
Abstract
Existing neural solvers for vehicle routing problems (VRPs) are typically trained either in a one-off manner on a fixed set of pre-defined tasks or in a lifelong manner on several tasks arriving sequentially, assuming sufficient training on each task. Both settings overlook a common real-world property: problem patterns may drift continually over time, yielding a massive stream of sequentially arriving tasks while offering only limited training resources per task. In this paper, we study a novel lifelong learning paradigm for neural VRP solvers under tasks that drift continually across learning time steps, where sufficient training on any given task is unavailable at any time. We propose Dual Replay with Experience Enhancement (DREE), a general framework to improve learning efficiency and mitigate catastrophic forgetting under such drift. Extensive experiments show that, under such continual drift, DREE effectively learns new tasks, preserves prior knowledge, improves generalization to unseen tasks, and can be applied to diverse existing neural solvers.