MetaBox-v2: A Unified Benchmark Platform for Meta-Black-Box Optimization

📅 2025-05-23
📈 Citations: 0
🤖 AI Summary
To address the lack of a unified, efficient, and scalable benchmarking platform for Meta-Black-Box Optimization (MetaBBO), this paper introduces MetaBox-v2, the first open-source, comprehensive MetaBBO benchmark supporting single-objective, multi-objective, multi-model, and multi-task scenarios. Its core contributions are fourfold: (1) a unified modular architecture compatible with reinforcement learning, evolutionary, and gradient-based meta-optimizers; (2) distributed parallel training acceleration, achieving a 10-40x speedup; (3) a broad benchmark suite covering 18 problem categories and 1,900+ task instances; and (4) a highly extensible analysis toolkit with plug-and-play APIs. MetaBox-v2 fully reproduces 23 state-of-the-art baselines and systematically evaluates them in terms of optimization performance, cross-task generalization, and learning efficiency. It provides researchers and practitioners with a standardized evaluation framework and rapid experimental infrastructure for advancing MetaBBO research.

📝 Abstract
Meta-Black-Box Optimization (MetaBBO) streamlines the automation of optimization algorithm design through meta-learning. It typically employs a bi-level structure: the meta-level policy undergoes meta-training to reduce the manual effort required in developing algorithms for low-level optimization tasks. The original MetaBox (2023) provided the first open-source framework for reinforcement learning-based single-objective MetaBBO. However, its relatively narrow scope no longer keeps pace with the swift advancement of this field. In this paper, we introduce MetaBox-v2 (https://github.com/MetaEvo/MetaBox) as a milestone upgrade with four novel features: 1) a unified architecture supporting RL, evolutionary, and gradient-based approaches, by which we reproduce 23 up-to-date baselines; 2) efficient parallelization schemes, which reduce the training/testing time by 10-40x; 3) a comprehensive benchmark suite of 18 synthetic/realistic tasks (1900+ instances) spanning single-objective, multi-objective, multi-model, and multi-task optimization scenarios; 4) plentiful and extensible interfaces for custom analysis/visualization and for integrating with external optimization tools/benchmarks. To show the utility of MetaBox-v2, we carry out a systematic case study that evaluates the built-in baselines in terms of optimization performance, generalization ability, and learning efficiency. Valuable insights are drawn from thorough and detailed analysis for practitioners and those new to the field.
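The bi-level structure described in the abstract can be illustrated with a self-contained toy sketch: a meta-level policy (here reduced to a single step-size parameter) is meta-trained to improve a low-level optimizer across a small distribution of tasks. All names below are illustrative assumptions for exposition; this is not the MetaBox-v2 API.

```python
import random

def low_level_optimize(step_size, dim=5, budget=100, seed=0):
    """Low-level task: random-search minimization of the sphere function.

    The step_size is the 'algorithm design choice' that the meta level controls.
    """
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best = sum(v * v for v in x)
    for _ in range(budget):
        cand = [v + rng.gauss(0, step_size) for v in x]
        f = sum(v * v for v in cand)
        if f < best:
            x, best = cand, f
    return best

def meta_train(candidate_step_sizes, n_tasks=10):
    """Meta level: select the step size with the best average low-level result
    over a distribution of task instances (here, different random seeds)."""
    def avg_perf(s):
        return sum(low_level_optimize(s, seed=t) for t in range(n_tasks)) / n_tasks
    return min(candidate_step_sizes, key=avg_perf)

best_step = meta_train([0.01, 0.1, 0.5, 2.0])
print(best_step)
```

In a real MetaBBO system the meta level is a learned policy (e.g., an RL agent or a gradient-based meta-optimizer) rather than a grid search over one scalar, and the low-level tasks are full benchmark problems; the nesting of "train the controller on many tasks, then deploy it on new ones" is the same.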
Problem

Research questions and friction points this paper is trying to address.

Expands the MetaBBO framework to support diverse optimization methods
Accelerates training and testing through efficient parallelization
Provides extensive benchmarks for multi-scenario optimization tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified architecture supporting RL, evolutionary, and gradient-based methods
Efficient parallelization reducing training/testing time by 10-40x
Comprehensive benchmark suite with 18 diverse task categories
Zeyuan Ma
South China University of Technology
Meta-Black-Box Optimization · Reinforcement Learning · Learning to Optimize
Yue-Jiao Gong
South China University of Technology
Hongshu Guo
South China University of Technology
Wenjie Qiu
South China University of Technology
Large-scale global optimization · Black-box optimization · Evolutionary computation
Sijie Ma
South China University of Technology
Hongqiao Lian
South China University of Technology
Kaixu Chen
South China University of Technology
Chen Wang
South China University of Technology
Zhiyang Huang
South China University of Technology
Zechuan Huang
South China University of Technology
Guojun Peng
South China University of Technology
Ran Cheng
Hong Kong Polytechnic University
Yining Ma
Postdoctoral Associate, MIT
Machine Learning · Optimization · Learning to Optimize · Neural Combinatorial Optimization