🤖 AI Summary
Existing meta-black-box optimization (MetaBBO) approaches rely on static task distributions and large-scale offline training data, rendering them ill-suited for real-world scenarios where novel tasks continuously emerge and dynamically evolve. To address this limitation, this work pioneers the integration of lifelong (continual) learning into MetaBBO, proposing LiBOG, a dual-path knowledge-consolidation framework that combines cross-task transfer learning with intra-task online fine-tuning. The framework incorporates gradient projection and elastic weight consolidation to mitigate catastrophic forgetting. The resulting method enables online learning over task streams and automatic generation of high-performance optimizers. Empirical evaluation across diverse benchmark task streams shows that the approach improves optimization performance by 23.6% on average over static meta-optimizers while reducing the forgetting rate to 4.1%, substantiating significant gains in continual adaptability and generalization robustness.
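The summary names elastic weight consolidation (EWC) as one of the consolidation mechanisms. The sketch below illustrates the generic EWC idea only, not LiBOG's actual implementation: a diagonal Fisher information estimate weights a quadratic penalty that anchors important parameters to their values from the previous task. The function names (`fisher_diagonal`, `ewc_penalty`) and the toy numbers are illustrative assumptions.

```python
import numpy as np

def fisher_diagonal(grads):
    """Approximate the diagonal Fisher information from per-sample loss
    gradients: the mean of squared gradients over samples."""
    return np.mean(np.square(grads), axis=0)

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Quadratic EWC penalty anchoring current weights `theta` to the
    weights `theta_old` learned on the previous task; the Fisher diagonal
    makes parameters that were important for the old task move less."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Toy usage: two parameters, Fisher estimated from three sample gradients.
grads = np.array([[1.0, 0.1], [2.0, 0.2], [3.0, 0.3]])
fisher = fisher_diagonal(grads)          # ≈ [4.6667, 0.0467]
theta_old = np.array([0.5, 0.5])
theta = np.array([1.5, 1.5])
# Total training loss on the new task would be task_loss + this penalty.
print(ewc_penalty(theta, theta_old, fisher, lam=1.0))  # ≈ 2.3567
```

In a lifelong-learning loop, the Fisher diagonal and anchor weights would be refreshed after each task, so later tasks are regularized toward all previously consolidated knowledge.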
📝 Abstract
Meta-Black-Box Optimization (MetaBBO) garners attention for its success in automating the configuration and generation of black-box optimizers, significantly reducing the human effort required for optimizer design and discovering optimizers that outperform classic human-designed ones. However, existing MetaBBO methods conduct one-off training under the assumption that a stationary problem distribution with extensive and representative training problem samples is available in advance. This assumption is often impractical in real-world scenarios, where diverse problems following a shifting distribution continually arise. Consequently, there is a pressing need for methods that can continuously learn from new problems encountered on the fly and progressively enhance their capabilities. In this work, we explore a novel paradigm of lifelong learning in MetaBBO and introduce LiBOG, a novel approach designed to learn from sequentially encountered problems and generate high-performance optimizers for Black-Box Optimization (BBO). LiBOG consolidates knowledge both across tasks and within tasks to mitigate catastrophic forgetting. Extensive experiments demonstrate LiBOG's effectiveness in learning to generate high-performance optimizers in a lifelong learning manner, addressing catastrophic forgetting while maintaining the plasticity to learn new tasks.