Geometric Learning in Black-Box Optimization: A GNN Framework for Algorithm Performance Prediction

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
In black-box optimization, existing performance prediction methods predominantly rely on tabular representations of problem features, neglecting the critical influence of algorithm configurations. To address this limitation, we propose an end-to-end prediction framework based on heterogeneous graph neural networks (HGNNs), which explicitly models higher-order interactions among problem features, algorithmic operators, and parameter configurations. This work is the first to introduce heterogeneous graph structures into algorithm performance prediction, thereby overcoming inherent limitations of conventional tabular modeling. Our approach integrates exploratory landscape analysis (ELA) features with modular optimizers, specifically modCMA-ES and modDE, and evaluates performance on the 24-function BBOB benchmark. Experimental results show up to a 36.6% reduction in mean squared error (MSE) compared with state-of-the-art tabular learning methods, establishing a clear empirical advantage.

📝 Abstract
Automated algorithm performance prediction in numerical black-box optimization often relies on problem characterizations, such as exploratory landscape analysis features. These features are typically used as inputs to machine learning models and are represented in a tabular format. However, such approaches often overlook algorithm configurations, a key factor influencing performance. The relationships between algorithm operators, parameters, problem characteristics, and performance outcomes form a complex structure best represented as a graph. This work explores the use of heterogeneous graph data structures and graph neural networks to predict the performance of optimization algorithms by capturing the complex dependencies between problems, algorithm configurations, and performance outcomes. We focus on two modular frameworks, modCMA-ES and modDE, which decompose two widely used derivative-free optimization algorithms: the covariance matrix adaptation evolution strategy (CMA-ES) and differential evolution (DE). We evaluate 324 modCMA-ES and 576 modDE variants on 24 BBOB problems across six runtime budgets and two problem dimensions. Achieving up to 36.6% improvement in MSE over traditional tabular-based methods, this work highlights the potential of geometric learning in black-box optimization.
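The abstract's core idea is to replace a flat feature table with a heterogeneous graph whose typed nodes (problems, algorithm configurations, performance outcomes) are linked by typed edges. The minimal sketch below illustrates that data structure in plain Python; the node types, relation names, and feature values are illustrative assumptions, not the paper's exact schema.

```python
from collections import defaultdict

class HeteroGraph:
    """A tiny heterogeneous graph: each node carries a type, each edge a relation."""
    def __init__(self):
        self.nodes = {}                 # node_id -> (node_type, feature dict)
        self.edges = defaultdict(list)  # relation triple -> list of (src, dst)

    def add_node(self, node_id, node_type, features):
        self.nodes[node_id] = (node_type, features)

    def add_edge(self, src, dst, relation):
        self.edges[relation].append((src, dst))

g = HeteroGraph()
# A BBOB problem node with a (hypothetical) ELA feature value.
g.add_node("f1_dim5", "problem", {"ela_meta.lin_r2": 0.93})
# A modCMA-ES variant node described by its module choices (placeholder values).
g.add_node("cma_v1", "config", {"elitism": True, "mirrored": "mirrored"})
# A performance node for one (problem, config, budget) measurement.
g.add_node("run_0", "performance", {"budget": 500})
g.add_edge("cma_v1", "run_0", ("config", "evaluated_as", "performance"))
g.add_edge("run_0", "f1_dim5", ("performance", "on", "problem"))
```

In a GNN framework such as PyTorch Geometric, the same structure would be expressed as a `HeteroData` object, with message passing aggregating information across the typed edges before a readout predicts performance.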
Problem

Research questions and friction points this paper is trying to address.

Predict optimization algorithm performance using GNNs
Capture dependencies between problems and algorithm configurations
Improve accuracy over traditional tabular-based prediction methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses GNNs for algorithm performance prediction
Leverages heterogeneous graph data structures
Focuses on modCMA-ES and modDE frameworks
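The variant counts in the abstract (324 modCMA-ES, 576 modDE configurations) arise because modular frameworks define an algorithm as a Cartesian product of per-module options. The sketch below shows that enumeration; the module names and option values are hypothetical placeholders, and only the total count of 324 (the modCMA-ES figure from the abstract) is taken from the paper.

```python
from itertools import product

# Hypothetical module option lists whose product size happens to be 324.
module_options = {
    "elitism":       [False, True],                  # 2 options
    "mirrored":      ["off", "mirrored"],            # 2 options
    "base_sampler":  ["gauss", "sobol", "halton"],   # 3 options
    "weights":       ["default", "equal", "rank"],   # 3 options
    "local_restart": ["off", "ipop", "bipop"],       # 3 options
    "step_size":     ["csa", "tpa", "msr"],          # 3 options
}

# Each variant is one concrete assignment of an option to every module.
variants = [dict(zip(module_options, combo))
            for combo in product(*module_options.values())]
print(len(variants))  # 2 * 2 * 3 * 3 * 3 * 3 = 324
```

Each such variant dictionary maps naturally onto operator and parameter nodes in the heterogeneous graph, which is what lets the GNN share information across configurations that differ in only a few modules.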