Speeding Up the NSGA-II via Dynamic Population Sizes

📅 2025-09-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the slow convergence of NSGA-II caused by its fixed population size, this paper introduces, for the first time, a dynamic population mechanism: the initial population size is set to 4 and doubles every τ fitness evaluations until it reaches an upper bound μ. A rigorous runtime analysis establishes an expected runtime of O(n log n) on the OneMinMax benchmark, improving the best-known bound for the classic NSGA-II by a factor of Θ(n). Furthermore, the authors propose a parameter-free variant based on concurrent runs that removes τ entirely; it incurs only an O(log n) slowdown relative to an optimal choice of τ while retaining a Θ(n / log n) speedup over the classic NSGA-II. The core contributions are: (1) the first theoretical analysis proving a runtime acceleration for NSGA-II via dynamic population sizing; and (2) an adaptive population control framework that bridges theoretical rigor and practical applicability.

📝 Abstract
Multi-objective evolutionary algorithms (MOEAs) are among the most widely and successfully applied optimizers for multi-objective problems. However, to store many optimal trade-offs (the Pareto optima) at once, MOEAs are typically run with a large, static population of solution candidates, which can slow down the algorithm. We propose the dynamic NSGA-II (dNSGA-II), which is based on the popular NSGA-II and features a non-static population size. The dNSGA-II starts with a small initial population size of four and doubles it after a user-specified number $\tau$ of function evaluations, up to a maximum size of $\mu$. Via a mathematical runtime analysis, we prove that the dNSGA-II with parameters $\mu \geq 4(n + 1)$ and $\tau \geq \frac{256}{50} e n$ computes the full Pareto front of the \textsc{OneMinMax} benchmark of size $n$ in $O(\log(\mu) \tau + \mu \log(n))$ function evaluations, both in expectation and with high probability. For an optimal choice of $\mu$ and $\tau$, the resulting $O(n \log(n))$ runtime improves the optimal expected runtime of the classic NSGA-II by a factor of $\Theta(n)$. In addition, we show that the parameter $\tau$ can be removed when utilizing concurrent runs of the dNSGA-II. This approach leads to a mild slow-down by a factor of $O(\log(n))$ compared to an optimal choice of $\tau$ for the dNSGA-II, which is still a speed-up of $\Theta(n / \log(n))$ over the classic NSGA-II.
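The population-size schedule described in the abstract can be sketched directly. The function name and the evaluation-count interface below are illustrative assumptions, not taken from the paper:

```python
def dynamic_population_size(evaluations: int, tau: int, mu: int) -> int:
    """Population size of the dNSGA-II after `evaluations` function
    evaluations: starts at 4 and doubles every `tau` evaluations,
    capped at the maximum size `mu`."""
    doublings = evaluations // tau
    return min(4 * 2 ** doublings, mu)

# With tau = 100 and mu = 64: the size is 4 for the first 100
# evaluations, then 8, 16, 32, and finally stays at 64.
```

The cap at $\mu$ matters for the runtime bound: once the population is large enough to hold the full Pareto front ($\mu \geq 4(n + 1)$), further growth would only add cost per generation.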
Problem

Research questions and friction points this paper is trying to address.

Speeding up NSGA-II with dynamic population sizes
Reducing runtime for multi-objective evolutionary algorithms
Optimizing Pareto front computation via parameter adjustment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic population size starting small then doubling
Mathematical runtime analysis proving efficiency improvement
Concurrent runs eliminate parameter dependency for flexibility
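One plausible way the concurrent-runs idea can remove the parameter $\tau$, sketched under the assumption that the runs hedge over geometrically spaced $\tau$ candidates (the paper's exact scheduling may differ), is to split the evaluation budget evenly across the instances:

```python
def concurrent_tau_runs(total_evals: int, num_runs: int) -> list[tuple[int, int]]:
    """Assign each concurrent dNSGA-II instance a tau value that is a
    power of two, plus an equal share of the evaluation budget.
    Hedging over num_runs candidate tau values costs a factor of
    num_runs (O(log n) runs) compared to knowing the best tau in
    advance, matching the mild O(log n) slow-down from the abstract."""
    per_run = total_evals // num_runs
    return [(2 ** i, per_run) for i in range(num_runs)]

# concurrent_tau_runs(1200, 3) -> [(1, 400), (2, 400), (4, 400)]
```

Since one of the geometrically spaced candidates is within a constant factor of the optimal $\tau$, the best-performing run loses only the budget spent on the other $O(\log n)$ instances.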