🤖 AI Summary
To address the lack of a unified, efficient, and scalable benchmarking platform for Meta-Black-Box Optimization (MetaBBO), this paper introduces MetaBox-v2, the first open-source, comprehensive MetaBBO benchmark supporting single- and multi-objective, multi-model, and multi-task scenarios. Its core contributions are fourfold: (1) a unified modular architecture compatible with reinforcement learning, evolutionary, and gradient-based meta-optimizers; (2) distributed parallel training acceleration, achieving a 10–40× speedup; (3) a broad benchmark suite covering 18 problem categories and 1,900+ task instances; and (4) a highly extensible analysis toolkit with plug-and-play APIs. MetaBox-v2 fully reproduces 23 state-of-the-art baselines and systematically demonstrates significant improvements in optimization performance, cross-task generalization, and learning efficiency. It provides researchers and practitioners with a standardized evaluation framework and rapid experimental infrastructure for advancing MetaBBO research.
📝 Abstract
Meta-Black-Box Optimization (MetaBBO) streamlines the automation of optimization algorithm design through meta-learning. It typically employs a bi-level structure: a meta-level policy undergoes meta-training to reduce the manual effort required to develop algorithms for low-level optimization tasks. The original MetaBox (2023) provided the first open-source framework for reinforcement learning-based single-objective MetaBBO. However, its relatively narrow scope no longer keeps pace with the swift advancement of this field. In this paper, we introduce MetaBox-v2 (https://github.com/MetaEvo/MetaBox) as a milestone upgrade with four novel features: 1) a unified architecture supporting RL-based, evolutionary, and gradient-based approaches, with which we reproduce 23 up-to-date baselines; 2) efficient parallelization schemes that reduce training/testing time by 10-40x; 3) a comprehensive benchmark suite of 18 synthetic/realistic tasks (1,900+ instances) spanning single-objective, multi-objective, multi-model, and multi-task optimization scenarios; 4) plentiful, extensible interfaces for custom analysis/visualization and for integrating with external optimization tools/benchmarks. To demonstrate the utility of MetaBox-v2, we carry out a systematic case study evaluating the built-in baselines in terms of optimization performance, generalization ability, and learning efficiency. Valuable insights for practitioners and newcomers to the field are drawn from the thorough and detailed analysis.