Quantum King-Ring Domination in Chess: A QAOA Approach

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of semantically structured, human-interpretable benchmarks for evaluating quantum optimization algorithms on real-world constrained problems. We propose QKRD, the first NISQ-scale structured benchmark derived from chess tactical positions, comprising 5,000 problem instances that embed one-hot constraints and spatial locality, along with an integrated validation mechanism. Using this benchmark, we systematically evaluate the Quantum Approximate Optimization Algorithm (QAOA) with constraint-preserving mixers (XY and domain-wall), warm-start initialization, and Conditional Value-at-Risk (CVaR) optimization. Our experiments show that constraint-preserving mixers accelerate convergence by an average of 13 optimization steps, while warm starts cut a further 45 steps and significantly improve energy quality. QAOA outperforms greedy heuristics by 12.6% and random selection by 80.1%, highlighting the advantage of problem-aware strategies on structured instances.
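As a minimal illustration of why the constraint-preserving mixers mentioned above work (a generic sketch, not the authors' code): the two-qubit XY interaction X⊗X + Y⊗Y only moves amplitude between |01⟩ and |10⟩, so Hamming weight, and hence one-hot feasibility, is conserved under the mixer's evolution.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Two-qubit XY mixer term acting on qubits (0, 1)
H_xy = np.kron(X, X) + np.kron(Y, Y)

def ket(i):
    """Computational basis state; index 2*q0 + q1 over |00>,|01>,|10>,|11>."""
    v = np.zeros(4, dtype=complex)
    v[i] = 1.0
    return v

# The mixer swaps |01> <-> |10> and annihilates |00> and |11>,
# so it never leaves the fixed-Hamming-weight (one-hot) subspace.
print(np.allclose(H_xy @ ket(1), 2 * ket(2)))  # True
print(np.allclose(H_xy @ ket(0), 0))           # True
```

Because the mixer cannot create infeasible bitstrings, no penalty terms are needed to enforce the one-hot constraint, which matches the "eliminating penalty tuning" claim.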

πŸ“ Abstract
The Quantum Approximate Optimization Algorithm (QAOA) is extensively benchmarked on synthetic random instances such as MaxCut, TSP, and SAT problems, but these lack semantic structure and human interpretability, offering limited insight into performance on real-world problems with meaningful constraints. We introduce Quantum King-Ring Domination (QKRD), a NISQ-scale benchmark derived from chess tactical positions that provides 5,000 structured instances with one-hot constraints, spatial locality, and 10–40 qubit scale. The benchmark pairs human-interpretable coverage metrics with intrinsic validation against classical heuristics, enabling algorithmic conclusions without external oracles. Using QKRD, we systematically evaluate QAOA design choices and find that: constraint-preserving mixers (XY, domain-wall) converge approximately 13 steps faster than standard mixers (p < 10^-7, d ≈ 0.5) while eliminating penalty tuning; warm-start strategies reduce convergence by 45 steps (p < 10^-127, d = 3.35) with energy improvements exceeding d = 8; and Conditional Value-at-Risk (CVaR) optimization yields an informative negative result, with worse energy (p < 10^-40, d = 1.21) and no coverage benefit. Intrinsic validation shows QAOA outperforms greedy heuristics by 12.6% and random selection by 80.1%. Our results demonstrate that structured benchmarks reveal advantages of problem-informed QAOA techniques that are obscured in random instances. We release all code, data, and experimental artifacts for reproducible NISQ algorithm research.
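The CVaR objective evaluated in the abstract can be sketched in a few lines (a generic illustration of the standard CVaR-QAOA objective, not the authors' implementation): instead of minimizing the mean of all sampled energies, the optimizer minimizes the mean of only the best α-fraction of shots.

```python
import numpy as np

def cvar_objective(energies, alpha=0.25):
    """CVaR_alpha of sampled energies: the mean of the lowest
    ceil(alpha * n) values. alpha = 1.0 recovers the ordinary mean."""
    e = np.sort(np.asarray(energies, dtype=float))
    k = max(1, int(np.ceil(alpha * e.size)))
    return e[:k].mean()

samples = [3.0, -1.0, 2.0, 0.0]
print(cvar_objective(samples, alpha=0.5))  # -0.5 (mean of the two lowest)
print(cvar_objective(samples, alpha=1.0))  # 1.0  (plain sample mean)
```

Focusing on the best shots usually sharpens the optimization signal on random instances; the paper's negative result suggests this does not carry over to these structured, constrained instances.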
Problem

Research questions and friction points this paper is trying to address.

QAOA
benchmark
structured instances
human interpretability
NISQ
Innovation

Methods, ideas, or system contributions that make the work stand out.

QAOA
structured benchmark
constraint-preserving mixer
NISQ
chess-inspired optimization
Gerhard Stenzel
PhD Student, LMU Munich
quantum machine learning, optimization, computer science
Michael Kölle
LMU Munich, Department of Computer Science, Chair of Mobile and Distributed Systems
Tobias Rohe
Ludwig-Maximilians-Universität
Quantum Computing, Quantum Applications, Optimization
Julian Hager
LMU Munich, Department of Computer Science, Chair of Mobile and Distributed Systems
Leo Sunkel
LMU Munich, Department of Computer Science, Chair of Mobile and Distributed Systems
Maximilian Zorn
PhD Student, Mobile and Distributed Systems Group, LMU Munich
Machine Learning, Artificial Intelligence, Quantum Computing
Claudia Linnhoff-Popien
LMU Munich, Department of Computer Science, Chair of Mobile and Distributed Systems