Machine Learning Algorithms for Improving Black Box Optimization Solvers

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional derivative-free methods for black-box optimization (BBO) struggle in high-dimensional, noisy, and mixed-integer settings. This paper surveys how machine learning (ML) and reinforcement learning (RL) address these limitations: ML supplies expressive surrogates, adaptive update rules, meta-learning portfolios, and generative models, while RL enables dynamic operator configuration, robustness, and meta-optimization across tasks. The survey covers thirteen representative algorithms—NNs with mlrMBO, ZO-AdaMM, ABBO, DiBB, SPBOpt, B2Opt, diffusion-model-based BBO, Surr-RLDE, RBO, CAS-MORE, LB-SGD, PIBB, and Q-Mamba—spanning architectures from plain neural networks and Transformers to diffusion models and Mamba backbones, and reviews benchmark efforts such as the NeurIPS 2020 BBO Challenge and the MetaBox framework. Overall, it argues that ML and RL transform classical inexact solvers into more scalable, robust, and adaptive frameworks for real-world optimization, moving beyond conventional heuristics toward data-driven, architecture-aware optimization.

📝 Abstract
Black-box optimization (BBO) addresses problems where objectives are accessible only through costly queries without gradients or explicit structure. Classical derivative-free methods -- line search, direct search, and model-based solvers such as Bayesian optimization -- form the backbone of BBO, yet often struggle in high-dimensional, noisy, or mixed-integer settings. Recent advances use machine learning (ML) and reinforcement learning (RL) to enhance BBO: ML provides expressive surrogates, adaptive updates, meta-learning portfolios, and generative models, while RL enables dynamic operator configuration, robustness, and meta-optimization across tasks. This paper surveys these developments, covering representative algorithms such as NNs with the modular model-based optimization framework (mlrMBO), zeroth-order adaptive momentum methods (ZO-AdaMM), automated BBO (ABBO), distributed block-wise optimization (DiBB), partition-based Bayesian optimization (SPBOpt), the transformer-based optimizer (B2Opt), diffusion-model-based BBO, surrogate-assisted RL for differential evolution (Surr-RLDE), robust BBO (RBO), coordinate-ascent model-based optimization with relative entropy (CAS-MORE), log-barrier stochastic gradient descent (LB-SGD), policy improvement with black-box (PIBB), and offline Q-learning with Mamba backbones (Q-Mamba). We also review benchmark efforts such as the NeurIPS 2020 BBO Challenge and the MetaBox framework. Overall, we highlight how ML and RL transform classical inexact solvers into more scalable, robust, and adaptive frameworks for real-world optimization.
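The zeroth-order methods mentioned in the abstract (e.g. ZO-AdaMM) replace true gradients with finite-difference estimates along random directions and feed those estimates into an Adam-style momentum update. A minimal sketch of that idea follows; the function names, hyperparameters, and the sphere test function are illustrative choices, not details taken from the paper:

```python
import math
import random

def zo_gradient(f, x, mu=1e-3, n_samples=10):
    """Two-point zeroth-order gradient estimate: average
    (f(x + mu*u) - f(x)) / mu * u over random Gaussian directions u."""
    d = len(x)
    fx = f(x)
    g = [0.0] * d
    for _ in range(n_samples):
        u = [random.gauss(0.0, 1.0) for _ in range(d)]
        delta = (f([xi + mu * ui for xi, ui in zip(x, u)]) - fx) / mu
        g = [gi + delta * ui for gi, ui in zip(g, u)]
    return [gi / n_samples for gi in g]

def zo_adamm(f, x0, steps=300, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam-style momentum update driven by zeroth-order gradient
    estimates (the core idea behind ZO-AdaMM, heavily simplified)."""
    x = list(x0)
    m = [0.0] * len(x)  # first-moment (momentum) estimate
    v = [0.0] * len(x)  # second-moment estimate
    for t in range(1, steps + 1):
        g = zo_gradient(f, x)
        m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
        v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
        for i in range(len(x)):
            m_hat = m[i] / (1 - b1 ** t)  # bias-corrected moments
            v_hat = v[i] / (1 - b2 ** t)
            x[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimize a simple shifted sphere without ever computing a gradient.
random.seed(0)
sphere = lambda x: sum((xi - 1.0) ** 2 for xi in x)
x_opt = zo_adamm(sphere, [0.0] * 5)
```

Each iteration costs `n_samples + 1` function queries instead of one gradient call, which is exactly the trade-off that makes such methods attractive when only expensive black-box evaluations are available.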
Problem

Research questions and friction points this paper is trying to address.

Enhancing black-box optimization scalability in high-dimensional noisy settings
Developing ML surrogates and RL methods for derivative-free optimization
Improving robustness and adaptiveness of classical optimization solvers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Machine learning enhances black-box optimization with surrogates
Reinforcement learning enables dynamic operator configuration
Transformer and diffusion models improve scalability and robustness
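The surrogate idea in the first bullet can be sketched as a toy loop: keep an archive of true (expensive) evaluations, fit a cheap predictor on it, and use the predictor to screen candidate points so that true evaluations are spent only on promising ones. The 1-nearest-neighbour surrogate, the `surrogate_search` name, and all hyperparameters below are assumptions made for illustration, not the paper's method:

```python
import random

def surrogate_search(f, x0, budget=60, pool=50, sigma=0.5, seed=1):
    """Toy surrogate-assisted search: archive true evaluations, screen
    a pool of perturbed candidates with a cheap 1-nearest-neighbour
    surrogate, and spend each true evaluation only on the winner."""
    rng = random.Random(seed)
    archive = [(list(x0), f(x0))]  # pairs of (point, true value)

    def surrogate(x):
        # Predict f(x) as the value of the nearest archived point.
        d2 = lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x))
        return min(archive, key=d2)[1]

    for _ in range(budget - 1):
        cx, _ = min(archive, key=lambda p: p[1])  # current incumbent
        cands = [[xi + rng.gauss(0.0, sigma) for xi in cx]
                 for _ in range(pool)]
        winner = min(cands, key=surrogate)   # cheap screening pass
        archive.append((winner, f(winner)))  # one true evaluation
    return min(archive, key=lambda p: p[1])

# Example: 59 true evaluations on a shifted sphere in 3 dimensions.
sphere3 = lambda x: sum((xi - 2.0) ** 2 for xi in x)
x0 = [0.0, 0.0, 0.0]
best_x, best_val = surrogate_search(sphere3, x0)
```

Real ML-enhanced solvers replace the nearest-neighbour predictor with learned models (Gaussian processes, neural networks, Transformers) and the fixed perturbation scheme with learned or RL-configured operators, but the evaluation-saving structure of the loop is the same.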