🤖 AI Summary
Existing optimization algorithm benchmarks often lack engineering realism and thus fail to reflect practical performance. Method: We construct a highly diverse benchmark suite of 231 bounded, continuous, unconstrained engineering optimization problems derived from real-world CFD and FEA models. We further propose, for the first time, a nonlinear normalization technique for performance evaluation that uses random sampling as a statistical baseline, enabling unbiased, reproducible efficiency comparisons across heterogeneous problems. Contribution/Results: Based on hundreds of independent runs per problem, we robustly evaluate 20 deterministic and stochastic optimizers. The results reveal that most widely used metaheuristics are markedly inefficient on engineering problems, while only a few demonstrate superior performance. This benchmark framework enhances the authenticity, transparency, and practical relevance of algorithm assessment, establishing a new paradigm for selecting and improving optimization algorithms for engineering applications.
📝 Abstract
Benchmarking optimization algorithms is fundamental to the advancement of computational intelligence. However, widely adopted artificial test suites exhibit limited correspondence with the diversity and complexity of real-world engineering optimization tasks. This paper presents a new benchmark suite comprising 231 bounded, continuous, unconstrained optimization problems, the majority derived from engineering design and simulation scenarios, including computational fluid dynamics and finite element analysis models. In conjunction with this suite, a novel performance metric is introduced that employs random sampling as a statistical reference, providing a nonlinear normalization of objective values and enabling unbiased comparison of algorithmic efficiency across heterogeneous problems. Using this framework, 20 deterministic and stochastic optimization methods were systematically evaluated through hundreds of independent runs per problem, ensuring statistical robustness. The results indicate that only a few of the tested optimization methods consistently achieve excellent performance, while several commonly used metaheuristics suffer severe efficiency losses on engineering-type problems, underscoring the limitations of conventional benchmarks. Furthermore, the conducted tests are used to analyze various features of the optimization methods, providing practical guidelines for their application. Together, the proposed test suite and metric offer a transparent, reproducible, and practically relevant platform for evaluating and comparing optimization methods, thereby narrowing the gap between available benchmarks and realistic engineering applications.
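The abstract does not spell out the exact form of the random-sampling-based metric. As a rough illustration of the general idea only, not the paper's actual formula, one way such a baseline can nonlinearly normalize objective values is to map an optimizer's best result to its empirical quantile among uniformly random samples of the objective. The function name, signature, and scoring rule below are all assumptions introduced for this sketch:

```python
import numpy as np

def random_baseline_quantile(f, bounds, f_best, n_samples=10_000, rng=None):
    """Illustrative sketch (not the paper's metric): score an optimizer's
    best objective value f_best by the fraction of uniformly random
    samples of f that it fails to beat.

    A score near 0 means f_best outperforms almost all random samples;
    a score near 1 means the optimizer did no better than random search.
    Because the score is an empirical quantile, it is a nonlinear,
    scale-free normalization comparable across heterogeneous problems.
    """
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T  # bounds: [(lo, hi), ...]
    X = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    samples = np.apply_along_axis(f, 1, X)
    return float(np.mean(samples <= f_best))

# Example on the sphere function over [-5, 5]^3 (minimization):
sphere = lambda x: float(np.sum(x**2))
good = random_baseline_quantile(sphere, [(-5, 5)] * 3, f_best=0.01, rng=0)
poor = random_baseline_quantile(sphere, [(-5, 5)] * 3, f_best=75.0, rng=0)
```

Here `good` is close to 0 (a value of 0.01 beats essentially every random point), while `poor` is 1.0 (the worst possible value beats none), showing how the quantile compresses heterogeneous objective scales into a common [0, 1] range.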