When Switching Algorithms Helps: A Theoretical Study of Online Algorithm Selection

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the long-standing lack of theoretical foundations in online algorithm selection (OAS), in particular the absence of non-artificial problem instances demonstrating asymptotic speedups and of principled switching strategies. The study focuses on OneMax, a classical benchmark problem, and designs a practical strategy for switching between the $(1+\lambda)$ EA and the $(1+(\lambda,\lambda))$ GA. By combining fixed-start and fixed-target analytical perspectives, it reveals the complementary strengths of the two algorithms in different phases of the optimization process. Through rigorous probabilistic analysis and runtime complexity theory, the proposed strategy achieves an expected optimization time of $O(n \log \log n)$, improving upon the $\Theta\left(n \sqrt{ \frac{ \log n \log \log \log n}{ \log \log n}}\right)$ runtime of the better of the two algorithms used in isolation with optimally tuned parameters. This constitutes the first non-artificial theoretical evidence of asymptotic acceleration in OAS.
📝 Abstract
Online algorithm selection (OAS) aims to adapt the optimization process to changes in the fitness landscape and is expected to outperform any single algorithm from a given portfolio. Although this expectation is supported by numerous empirical studies, there are currently no theoretical results proving that OAS can yield asymptotic speedups (apart from some artificial examples for hyper-heuristics). Moreover, theory-based guidelines for when and how to switch between algorithms are largely missing. In this paper, we present the first theoretical example in which switching between two algorithms -- the $(1+\lambda)$ EA and the $(1+(\lambda,\lambda))$ GA -- solves the OneMax problem asymptotically faster than either algorithm used in isolation. We show that an appropriate choice of population sizes for the two algorithms allows the optimum to be reached in $O(n\log\log n)$ expected time, faster than the $\Theta(n\sqrt{\frac{\log n \log\log\log n}{\log\log n}})$ runtime of the best of these two algorithms with optimally tuned parameters. We first establish this bound under an idealized switching rule that changes from the $(1+\lambda)$ EA to the $(1+(\lambda,\lambda))$ GA at the optimal time. We then propose a realistic switching strategy that achieves the same performance. Our analysis combines fixed-start and fixed-target perspectives, illustrating how different algorithms dominate at different stages of the optimization process. This approach offers a promising path toward a deeper theoretical understanding of OAS.
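The switching idea described in the abstract can be sketched in code. The following is a minimal illustrative implementation, not the paper's construction: the population sizes (`lam_ea`, `lam_ga`), the mutation rate and crossover bias of the $(1+(\lambda,\lambda))$ GA, and the fitness threshold `switch_at` are placeholder values chosen for a small demo, whereas the paper derives the parameter choices (and the $O(n\log\log n)$ bound) analytically.

```python
import random

def one_max(x):
    """OneMax fitness: the number of one-bits in the bit string."""
    return sum(x)

def ea_step(x, lam, n):
    """One generation of the (1+lambda) EA: lam offspring by standard
    bit mutation with rate 1/n, elitist selection of the best."""
    best = x
    for _ in range(lam):
        y = [bit ^ (random.random() < 1.0 / n) for bit in x]
        if one_max(y) >= one_max(best):
            best = y
    return best

def ga_step(x, lam, n):
    """One generation of the (1+(lambda,lambda)) GA: a mutation phase
    with rate lam/n, then biased crossover (bias 1/lam) with the parent."""
    ell = sum(random.random() < lam / n for _ in range(n))  # ell ~ Bin(n, lam/n)
    # Mutation phase: lam offspring, each flipping exactly ell random positions.
    mutants = []
    for _ in range(lam):
        y = x[:]
        for i in random.sample(range(n), ell):
            y[i] ^= 1
        mutants.append(y)
    winner = max(mutants, key=one_max)
    # Crossover phase: take each bit from the mutation winner with prob 1/lam.
    best = x
    for _ in range(lam):
        y = [w if random.random() < 1.0 / lam else b for w, b in zip(winner, x)]
        if one_max(y) >= one_max(best):
            best = y
    return best

def switching_run(n, switch_at, lam_ea=4, lam_ga=4, max_gens=20000):
    """Run the (1+lambda) EA until the fitness reaches switch_at, then the
    (1+(lambda,lambda)) GA until the optimum is found (or the budget ends)."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_gens):
        if one_max(x) == n:
            break
        if one_max(x) < switch_at:
            x = ea_step(x, lam_ea, n)
        else:
            x = ga_step(x, lam_ga, n)
    return x
```

The threshold-based rule stands in for both the idealized optimal-time switch and the paper's realistic switching strategy; the point it illustrates is only the structure of the portfolio, with the EA handling the early phase and the GA the endgame.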
Problem

Research questions and friction points this paper is trying to address.

Online Algorithm Selection
Asymptotic Speedup
Algorithm Switching
Theoretical Analysis
Fitness Landscape
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online Algorithm Selection
Evolutionary Algorithms
Asymptotic Speedup
OneMax Problem
Algorithm Switching Strategy