Randomness as Reference: Benchmark Metric for Optimization in Engineering

📅 2025-11-21
🤖 AI Summary
Existing optimization algorithm benchmarks often lack engineering realism and fail to reflect practical performance. Method: We construct a highly diverse benchmark suite of 231 continuous, unconstrained engineering optimization problems derived from real-world CFD and FEA models. We further propose, for the first time, a nonlinear normalization technique for performance evaluation that uses random sampling as a statistical baseline, enabling unbiased, reproducible efficiency comparisons across heterogeneous problems. Contribution/Results: Based on hundreds of independent runs per problem, we robustly evaluate 20 deterministic and stochastic optimizers. The results reveal that most widely used metaheuristics are markedly inefficient on engineering problems, while only a few demonstrate superior performance. This benchmark framework improves the authenticity, transparency, and practical relevance of algorithm assessment, establishing a new paradigm for selecting and improving optimization algorithms for engineering applications.

📝 Abstract
Benchmarking optimization algorithms is fundamental to the advancement of computational intelligence. However, widely adopted artificial test suites exhibit limited correspondence with the diversity and complexity of real-world engineering optimization tasks. This paper presents a new benchmark suite comprising 231 bounded, continuous, unconstrained optimization problems, the majority derived from engineering design and simulation scenarios, including computational fluid dynamics and finite element analysis models. In conjunction with this suite, a novel performance metric is introduced that employs random sampling as a statistical reference, providing a nonlinear normalization of objective values and enabling unbiased comparison of algorithmic efficiency across heterogeneous problems. Using this framework, 20 deterministic and stochastic optimization methods were systematically evaluated through hundreds of independent runs per problem, ensuring statistical robustness. The results indicate that only a few of the tested optimization methods consistently achieve excellent performance, while several commonly used metaheuristics suffer severe efficiency loss on engineering-type problems, underscoring the limitations of conventional benchmarks. Furthermore, the test results are used to analyze various characteristics of the optimization methods, yielding practical guidelines for their application. Together, the proposed test suite and metric offer a transparent, reproducible, and practically relevant platform for evaluating and comparing optimization methods, narrowing the gap between available benchmark tests and realistic engineering applications.
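
The abstract does not reproduce the metric itself, so the following is a minimal sketch of one plausible construction, assuming the metric scores an optimizer's best objective value against the empirical distribution obtained by uniform random sampling of the bounded search space. The function name random_baseline_score, the -log10 quantile mapping, and the toy sphere problem are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def random_baseline_score(objective, bounds, f_best, n_samples=100_000, rng=None):
    """Score an optimizer's best value f_best against a uniform random-sampling
    baseline on a bounded, continuous, unconstrained problem.

    The score is -log10 of the empirical probability that a single uniform
    random sample ties or beats f_best: 0 means no better than random, and
    each unit corresponds to needing roughly 10x more random samples to match
    the optimizer. (Hypothetical construction; the paper's exact nonlinear
    normalization may differ.)
    """
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T          # bounds: [(lo, hi), ...]
    X = rng.uniform(lo, hi, size=(n_samples, lo.size))  # uniform samples in the box
    f_rand = np.apply_along_axis(objective, 1, X)       # baseline objective values
    p = np.mean(f_rand <= f_best)                       # chance a random point ties/beats f_best
    p = max(p, 1.0 / n_samples)                         # avoid log(0) at the resolution limit
    return -np.log10(p)

# Toy example on a 5-D sphere function: a near-optimal value scores high,
# a mediocre value scores near 0.
sphere = lambda x: float(np.sum(x**2))
bounds = [(-5.0, 5.0)] * 5
print(random_baseline_score(sphere, bounds, f_best=0.01))
print(random_baseline_score(sphere, bounds, f_best=40.0))
```

Because the score is a quantile of the random-sampling distribution rather than a raw objective value, it is comparable across problems with very different objective scales, which is presumably what makes the normalization "unbiased" across heterogeneous problems.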
Problem

Research questions and friction points this paper is trying to address.

Widely adopted artificial benchmark suites lack the diversity and complexity of real-world engineering problems
Comparing algorithmic efficiency across heterogeneous problems requires an unbiased, normalized metric
Conventional benchmarks fail to reveal the efficiency losses that optimizers suffer on engineering-type problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Engineering-derived benchmark suite of 231 problems, mostly from CFD and FEA models
Random-sampling baseline metric with nonlinear normalization for unbiased algorithm comparison
Systematic evaluation of 20 deterministic and stochastic optimizers over hundreds of independent runs per problem (see the sketch after this list)
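
As a rough illustration of the evaluation protocol, the sketch below runs a stochastic optimizer repeatedly on each problem and aggregates the baseline-normalized scores. It reuses random_baseline_score, sphere, and bounds from the earlier sketch; the harness structure, the de_optimize wrapper around SciPy's differential_evolution, and the median/std aggregation are assumptions for illustration, not the paper's exact procedure (which evaluates 20 optimizers with hundreds of runs per problem).

```python
import numpy as np
from scipy.optimize import differential_evolution

def evaluate_optimizer(optimize, problems, n_runs=100, seed=0):
    """Run a (stochastic) optimizer n_runs times per problem and aggregate
    the random-baseline scores. `optimize(objective, bounds, seed)` must
    return the best objective value found. Uses random_baseline_score()
    from the earlier sketch."""
    rng = np.random.default_rng(seed)
    report = {}
    for name, (objective, bounds) in problems.items():
        scores = [
            random_baseline_score(objective, bounds,
                                  optimize(objective, bounds, rng.integers(2**31)))
            for _ in range(n_runs)
        ]
        # Median and spread of the normalized score across independent runs.
        report[name] = (float(np.median(scores)), float(np.std(scores)))
    return report

# Hypothetical wrapper around SciPy's differential evolution, standing in
# for one of the tested metaheuristics.
def de_optimize(objective, bounds, seed):
    res = differential_evolution(objective, bounds, seed=int(seed), maxiter=50)
    return float(res.fun)

problems = {"sphere-5d": (sphere, bounds)}
print(evaluate_optimizer(de_optimize, problems, n_runs=20))
```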