LiBOG: Lifelong Learning for Black-Box Optimizer Generation

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing meta-black-box optimization (MetaBBO) approaches conduct one-off training under the assumption that a stationary problem distribution with abundant, representative training samples is available in advance, rendering them ill-suited for real-world scenarios where diverse problems with shifting distributions continually arise. To address this limitation, this work introduces lifelong learning into MetaBBO and proposes LiBOG, an approach that learns from sequentially encountered problems and automatically generates high-performance optimizers for black-box optimization. LiBOG consolidates knowledge both across tasks and within tasks to mitigate catastrophic forgetting while preserving the plasticity needed to learn new tasks. Extensive experiments on benchmark task streams demonstrate its effectiveness in this lifelong learning setting.

📝 Abstract
Meta-Black-Box Optimization (MetaBBO) garners attention due to its success in automating the configuration and generation of black-box optimizers, significantly reducing the human effort required for optimizer design and discovering optimizers that outperform classic human-designed ones. However, existing MetaBBO methods conduct one-off training under the assumption that a stationary problem distribution with extensive and representative training problem samples is available in advance. This assumption is often impractical in real-world scenarios, where diverse problems following a shifting distribution continually arise. Consequently, there is a pressing need for methods that can continuously learn from new problems encountered on the fly and progressively enhance their capabilities. In this work, we explore a novel paradigm of lifelong learning in MetaBBO and introduce LiBOG, a novel approach designed to learn from sequentially encountered problems and generate high-performance optimizers for Black-Box Optimization (BBO). LiBOG consolidates knowledge both across tasks and within tasks to mitigate catastrophic forgetting. Extensive experiments demonstrate LiBOG's effectiveness in learning to generate high-performance optimizers in a lifelong learning manner, addressing catastrophic forgetting while maintaining plasticity to learn new tasks.
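The abstract does not spell out LiBOG's consolidation mechanism. As a hedged illustration only, a standard regularization-based approach to catastrophic forgetting in sequential training is elastic weight consolidation (EWC), which penalizes drift of parameters that carried high importance (Fisher information) for earlier tasks; this is a generic sketch of that family of techniques, not necessarily what LiBOG uses:

```python
def ewc_penalty(params, anchor, fisher, lam=1.0):
    """EWC-style penalty: quadratic cost for moving parameters away from the
    values (`anchor`) they held after earlier tasks, weighted by each
    parameter's estimated importance (`fisher`) on those tasks."""
    return 0.5 * lam * sum(f * (p - a) ** 2 for p, a, f in zip(params, anchor, fisher))


def consolidated_loss(task_loss, params, anchor, fisher, lam=1.0):
    """Current-task loss plus the forgetting-mitigation penalty.
    Minimizing this trades off plasticity (fitting the new task) against
    stability (not overwriting knowledge from previous tasks)."""
    return task_loss + ewc_penalty(params, anchor, fisher, lam)
```

With `lam = 0` the learner is fully plastic (plain fine-tuning); larger `lam` increasingly pins down parameters that mattered for past tasks.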
Problem

Research questions and friction points this paper is trying to address.

Lifelong learning for black-box optimizer generation
Overcoming stationary problem distribution assumption
Mitigating catastrophic forgetting in sequential learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lifelong learning for optimizer generation
Consolidates knowledge across and within tasks
Mitigates catastrophic forgetting in MetaBBO
Jiyuan Pei
Victoria University of Wellington
Adaptive Operator Selection · Evolutionary Computation · Vehicle Routing
Yi Mei
Victoria University of Wellington
Jialin Liu
Lingnan University
Mengjie Zhang
Victoria University of Wellington