🤖 AI Summary
To address the rapidly escalating parameter optimization cost of the Quantum Approximate Optimization Algorithm (QAOA) with increasing circuit depth, this work introduces Principal Component Analysis (PCA) into the QAOA parameter optimization pipeline for the first time. We propose a method that learns dominant principal components from small-scale problem instances and transfers them to large-scale instances via parameter-space dimensionality reduction and reparameterization. Within a classical-quantum hybrid framework, our approach significantly compresses the dimensionality of the optimization space, thereby reducing the number of iterations required by classical optimizers. On MaxCut benchmarks, our method achieves higher approximation ratios than standard QAOA under matched parameter counts; alternatively, it reduces optimization iterations by approximately 40–60% at the cost of only marginal solution-quality degradation (<1%). This enables efficient cross-scale parameter transfer and establishes a scalable, PCA-augmented optimization paradigm for practical QAOA deployment.
📝 Abstract
The Quantum Approximate Optimization Algorithm (QAOA) is a promising variational algorithm for solving combinatorial optimization problems on near-term devices. However, increasing the number of layers in a QAOA circuit, which is correlated with better solution quality, causes the number of parameters to optimize to grow linearly. The classical optimizer therefore requires more iterations, and the additional circuit executions these demand impose a growing computational burden. To mitigate this issue, we introduce QAOA-PCA, a novel reparameterization technique that employs Principal Component Analysis (PCA) to reduce the dimensionality of the QAOA parameter space. By extracting principal components from the optimized parameters of smaller problem instances, QAOA-PCA enables efficient optimization with fewer parameters on larger instances. Our empirical evaluation on the prominent MaxCut problem demonstrates that QAOA-PCA consistently requires fewer iterations than standard QAOA, achieving substantial efficiency gains. While this comes at the cost of a slight reduction in approximation ratio compared to QAOA with the same number of layers, QAOA-PCA almost always outperforms standard QAOA when matched by parameter count. QAOA-PCA thus strikes a favorable balance between efficiency and performance, reducing optimization overhead without significantly compromising solution quality.
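The core reparameterization idea can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the training matrix of optimized small-instance parameters is random placeholder data, and the names (`reparameterize`, `k`, `p`) are hypothetical. The point is that a classical optimizer searches over a k-dimensional coefficient vector `z` instead of the full 2p QAOA angles, with PCA supplying the linear map back to full parameter space.

```python
import numpy as np

# Placeholder stand-in for optimized QAOA parameter vectors
# (gamma_1..gamma_p, beta_1..beta_p) collected from small instances.
rng = np.random.default_rng(0)
p = 10                                              # QAOA layers -> 2p parameters
small_instance_params = rng.normal(size=(30, 2 * p))

# Fit PCA via SVD: center the data, take the top-k right singular vectors.
mean = small_instance_params.mean(axis=0)
_, _, vt = np.linalg.svd(small_instance_params - mean, full_matrices=False)
k = 4                                               # reduced dimensionality, k << 2p
components = vt[:k]                                 # shape (k, 2p)

def reparameterize(z):
    """Map a k-dim coefficient vector z to a full 2p-dim QAOA parameter vector."""
    return mean + z @ components

# The classical optimizer now tunes z (k parameters) rather than 2p angles;
# z = 0 recovers the mean of the training parameters as a starting point.
theta = reparameterize(np.zeros(k))
```

On a larger instance, each candidate `z` proposed by the optimizer would be expanded through `reparameterize` before being bound to the circuit, so the quantum side of the loop is unchanged while the classical search space shrinks from 2p to k dimensions.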