AI Summary
This work addresses the scarcity of effective exploratory landscape features for automated algorithm selection in multi-objective optimization. It introduces five complementary feature categories that characterize the difficulty of continuous multi-objective optimization problems. These features are derived from random samples in both the decision and objective spaces, combined with non-dominated sorting, descriptive statistics, principal component analysis, graph-based modeling, and gradient estimation. Evaluated on standard benchmark suites, the proposed features substantially improve algorithm selection performance, consistently rank among the most important predictors, and enable selector models to approach the performance of the virtual best solver. This contribution provides a novel and effective feature toolkit for automated algorithm selection in multi-objective optimization.
Abstract
Automated Algorithm Selection (AAS) is a popular meta-algorithmic approach and has been shown to work well for single-objective optimisation in combination with exploratory landscape analysis (ELA) features, i.e., (numerical) descriptive features derived from sampling the black-box (continuous) optimisation problem. In contrast to the abundance of features that describe single-objective optimisation problems, only a few features have been proposed for multi-objective optimisation so far. Building upon recent work on exploratory landscape features for box-constrained continuous multi-objective optimisation problems, we propose a novel and complementary set of additional features (MO-ELA). These features are based on a random sample of points considering both the decision and objective space. The features are divided into five groups depending on how they are calculated: non-dominated sorting, descriptive statistics, principal component analysis, graph structures and gradient information. An AAS study conducted on well-established multi-objective benchmarks demonstrates that the proposed features help to distinguish between the performance of different algorithms and thus adequately capture problem hardness, resulting in selection models that come very close to the virtual best solver. After feature selection, the newly proposed features are frequently among the top contributors, underscoring their value in algorithm selection and problem characterisation.
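To make the sample-based feature computation more concrete, below is a minimal, illustrative sketch in Python (using NumPy). It is not the paper's actual MO-ELA feature set: the function names (non_dominated_mask, sample_features), the toy bi-objective test problem (bi_sphere), and the specific feature definitions are assumptions made for illustration only. The sketch draws a uniform random sample in the decision space, evaluates it in the objective space, and derives a few simple features in the spirit of four of the five groups (non-dominated sorting, descriptive statistics, PCA, gradient information); graph-based features are omitted for brevity.

```python
# Illustrative sketch only -- NOT the feature definitions proposed in the paper.
import numpy as np


def non_dominated_mask(F):
    """Boolean mask marking the non-dominated points of an (n, m) objective matrix (minimisation)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Point i is dominated if some other point is <= in every objective and < in at least one.
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        mask[i] = not dominated
    return mask


def sample_features(fun, lower, upper, n_samples=1000, seed=0):
    """Compute a handful of illustrative landscape features from a uniform random sample."""
    rng = np.random.default_rng(seed)
    d = len(lower)
    X = rng.uniform(lower, upper, size=(n_samples, d))   # decision-space sample
    F = np.array([fun(x) for x in X])                     # corresponding objective vectors

    feats = {}

    # (1) Non-dominated sorting: share of sampled points on the first non-dominated front.
    nd = non_dominated_mask(F)
    feats["nd_ratio"] = float(nd.mean())

    # (2) Descriptive statistics of the sampled objective values.
    feats["obj_mean"] = F.mean(axis=0)
    feats["obj_std"] = F.std(axis=0)
    feats["obj_corr_12"] = float(np.corrcoef(F[:, 0], F[:, 1])[0, 1])  # correlation of first two objectives

    # (3) Principal component analysis of the non-dominated points in objective space:
    #     proportion of variance explained by the first principal component.
    if nd.sum() > 1:
        cov = np.cov(F[nd], rowvar=False)
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        feats["pca_expl_var_pc1"] = float(eigvals[0] / eigvals.sum())

    # (4) Gradient information: finite-difference gradient estimates of the first two
    #     objectives, summarised by the mean angle between their gradients.
    eps = 1e-6
    angles = []
    for x in X[:100]:                                     # subsample to keep the estimate cheap
        fx = np.asarray(fun(x))
        grads = np.zeros((len(fx), d))
        for j in range(d):
            xp = x.copy()
            xp[j] += eps
            grads[:, j] = (np.asarray(fun(xp)) - fx) / eps
        g1, g2 = grads[0], grads[1]
        denom = np.linalg.norm(g1) * np.linalg.norm(g2)
        if denom > 0:
            angles.append(np.arccos(np.clip(g1 @ g2 / denom, -1.0, 1.0)))
    feats["mean_grad_angle"] = float(np.mean(angles))

    return feats


if __name__ == "__main__":
    # Hypothetical bi-objective test problem used only to demonstrate the feature computation.
    def bi_sphere(x):
        return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

    print(sample_features(bi_sphere, lower=np.zeros(3), upper=np.ones(3)))
```

In an AAS setting, feature vectors of this kind would be computed per problem instance and fed to a machine-learning selector that predicts the best-performing algorithm; the sketch above only illustrates the flavour of sample-based feature extraction, not the paper's concrete pipeline.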