🤖 AI Summary
This work addresses theoretical guarantees on Pareto front approximation quality for multi-objective evolutionary algorithms (MOEAs), using the classical OneMinMax benchmark. Through rigorous runtime analysis, we establish, for the first time, that a steady-state SPEA2 with its $\sigma$-distance mechanism computes an optimal approximation of the Pareto front in polynomial time. In contrast, the best proven guarantee for a comparable variant of NSGA-II is only an approximation ratio of roughly two, and both analyses and experiments indicate that it does not reach optimal approximations in polynomial time. Our analysis fills a fundamental theoretical gap regarding the approximation behavior of SPEA2, revealing an intrinsic advantage in approximation accuracy over NSGA-II. The theoretical findings are corroborated by empirical results that agree closely with the analytical predictions. This work thus provides the first provable guarantee of optimal Pareto front approximation for SPEA2 and advances the theoretical understanding of MOEA performance on discrete multi-objective optimization problems.
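For context, OneMinMax is the standard bi-objective benchmark on bit strings that asks to simultaneously maximize the number of zeros and the number of ones; every bit string lies on the Pareto front, so the difficulty is covering (or approximating) all objective values. A minimal sketch of the benchmark, with the function name `one_min_max` chosen here for illustration:

```python
def one_min_max(x):
    """Bi-objective OneMinMax: maximize (#zeros, #ones) of a bit string.

    Both objectives are to be maximized; every x is Pareto-optimal,
    and the Pareto front is {(n - k, k) : k = 0, ..., n}.
    """
    ones = sum(x)
    return (len(x) - ones, ones)

# Example: the string 1011 has one zero and three ones.
print(one_min_max([1, 0, 1, 1]))  # → (1, 3)
```

With population size below n + 1, the algorithm cannot cover the whole front, which is why approximation quality (how evenly the kept objective values spread over the front) becomes the question studied here.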
📝 Abstract
Together with the NSGA-II and the SMS-EMOA, the strength Pareto evolutionary algorithm 2 (SPEA2) is one of the most prominent dominance-based multi-objective evolutionary algorithms (MOEAs). Different from the NSGA-II, it compares mutually non-dominated solutions not via the crowding distance (essentially the distance to neighboring solutions) but via a more complex system of $\sigma$-distances that builds on the distances to all other solutions. In this work, we give a first mathematical proof that this more complex system of distances can be superior. More specifically, we prove that a simple steady-state SPEA2 can compute optimal approximations of the Pareto front of the OneMinMax benchmark in polynomial time. The best proven guarantee for a comparable variant of the NSGA-II only assures an approximation ratio of roughly two, and both mathematical analyses and experiments indicate that optimal approximations are not found efficiently.
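The $\sigma$-distance mechanism mentioned above can be sketched as follows. In SPEA2's archive truncation, each individual is associated with the sorted vector of its distances (in objective space) to all other individuals, and the individual with the lexicographically smallest vector, i.e. the one in the most crowded region when ties among nearest neighbors are broken by farther neighbors, is removed. This is a hedged illustration of the idea, not the paper's exact algorithm; the helper names are invented for this sketch:

```python
import math

def sigma_vector(i, objs):
    # Sorted distances from individual i to all other individuals
    # in objective space: SPEA2's sigma-distances use the whole
    # vector, not just the nearest neighbor.
    return sorted(math.dist(objs[i], objs[j])
                  for j in range(len(objs)) if j != i)

def truncation_victim(objs):
    # Remove the individual whose sorted distance vector is
    # lexicographically smallest; ties in the nearest-neighbor
    # distance are broken by the second-nearest, and so on.
    return min(range(len(objs)), key=lambda i: sigma_vector(i, objs))

# Two points crowd each other; the tie between them is broken by
# their second-nearest neighbors.
objs = [(0.0, 4.0), (1.0, 3.0), (1.2, 2.8), (4.0, 0.0)]
print(truncation_victim(objs))  # → 1
```

By contrast, the NSGA-II crowding distance of a point depends only on its two neighbors along each objective, which is the informational gap the paper's separation result exploits.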