Meta-Learning for Quantum Optimization via Quantum Sequence Model

📅 2025-12-04
📈 Citations: 0 · Influential: 0
🤖 AI Summary
The Quantum Approximate Optimization Algorithm (QAOA) suffers from slow convergence and poor solution quality on near-term quantum hardware because its highly non-convex energy landscape makes variational parameter optimization difficult. Method: We propose a quantum meta-learning framework for QAOA, introducing the Quantum Kernel-based Long Short-Term Memory (QK-LSTM) network, with only 43 trainable parameters, as a lightweight meta-learner that enables efficient, problem-scale-agnostic parameter initialization. QK-LSTM integrates quantum kernel methods into a recurrent neural network architecture, supporting joint classical–quantum sequence training and learning an adaptive optimization strategy. Contribution/Results: Evaluated on Max-Cut, QK-LSTM significantly improves approximation ratios and convergence speed. It generalizes strongly across problem sizes (n = 10–13) and accelerates convergence across scales, yielding fully transferable QAOA parameter initializations: initializations trained on small instances generalize effectively to larger, unseen ones.
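A minimal sketch of the learning-to-learn loop described above, assuming a plain `torch.nn.LSTM` as a stand-in for the QK-LSTM cell and a hypothetical black-box `qaoa_cost` that returns the QAOA energy for a (γ, β) parameter vector; none of these names come from the paper.

```python
import torch
import torch.nn as nn

P = 2                # QAOA depth -> 2*P variational parameters (gammas, betas)
n_params = 2 * P

class MetaLearner(nn.Module):
    """Recurrent meta-learner: maps the (params, cost) history to new params."""
    def __init__(self, hidden=4):
        super().__init__()
        # Classical LSTM stands in for the QK-LSTM cell (an assumption).
        self.lstm = nn.LSTM(input_size=n_params + 1, hidden_size=hidden)
        self.head = nn.Linear(hidden, n_params)

    def forward(self, params, cost, state):
        x = torch.cat([params, cost]).view(1, 1, -1)   # (seq, batch, features)
        out, state = self.lstm(x, state)
        return self.head(out.view(-1)), state

def meta_rollout(model, qaoa_cost, steps=5):
    """Unroll the learned optimizer; the meta-loss is the summed QAOA cost."""
    params = torch.zeros(n_params)
    state, meta_loss = None, 0.0
    for _ in range(steps):
        cost = qaoa_cost(params)                  # hypothetical quantum evaluation
        params, state = model(params, cost.view(1), state)
        meta_loss = meta_loss + cost
    return meta_loss
```

Training would backpropagate through `meta_rollout` (e.g., `meta_loss.backward()` plus an outer Adam step over many random Max-Cut instances), so the recurrent model learns an initialization and update policy rather than a solution to any single graph.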

📝 Abstract
The Quantum Approximate Optimization Algorithm (QAOA) is a leading approach for solving combinatorial optimization problems on near-term quantum processors. However, finding good variational parameters remains a significant challenge due to the non-convex energy landscape, often resulting in slow convergence and poor solution quality. In this work, we propose a quantum meta-learning framework that trains advanced quantum sequence models to generate effective parameter initialization policies. We investigate four classical and quantum sequence models, including the Quantum Kernel-based Long Short-Term Memory (QK-LSTM), as learned optimizers in a "learning to learn" paradigm. Our numerical experiments on the Max-Cut problem demonstrate that the QK-LSTM optimizer achieves superior performance, obtaining the highest approximation ratios and exhibiting the fastest convergence rate across all tested problem sizes (n = 10 to 13). Crucially, the QK-LSTM model achieves perfect parameter transferability by synthesizing a single, fixed set of near-optimal parameters, leading to a sustained acceleration of convergence even when generalizing to larger problems. This capability, enabled by the compactness and expressive power of the quantum kernel architecture, underscores its effectiveness. The QK-LSTM, with only 43 trainable parameters, substantially outperforms the classical LSTM (56 parameters) and other quantum sequence models, establishing a robust pathway toward highly efficient parameter initialization for variational quantum algorithms in the NISQ era.
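For concreteness, here is a hedged PennyLane sketch of the depth-p QAOA circuit whose 2p variational parameters (the γ and β angles) the meta-learner initializes. The 4-node ring graph, the depth, and the device choice are illustrative assumptions, not details from the paper.

```python
import pennylane as qml

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # illustrative 4-node ring graph
n_wires, P = 4, 2                             # qubits, QAOA depth
dev = qml.device("default.qubit", wires=n_wires)

# Max-Cut cost observable up to constants: C = sum over edges of (1 - Z_u Z_v)/2,
# so minimizing <sum Z_u Z_v> maximizes the cut.
cost_h = qml.Hamiltonian(
    [1.0] * len(edges),
    [qml.PauliZ(u) @ qml.PauliZ(v) for u, v in edges],
)

@qml.qnode(dev)
def qaoa_cost(params):
    gammas, betas = params[:P], params[P:]
    for w in range(n_wires):                  # start in the uniform superposition
        qml.Hadamard(wires=w)
    for layer in range(P):
        for u, v in edges:                    # cost unitary exp(-i * gamma * C)
            qml.IsingZZ(2 * gammas[layer], wires=[u, v])
        for w in range(n_wires):              # mixer unitary exp(-i * beta * B)
            qml.RX(2 * betas[layer], wires=w)
    return qml.expval(cost_h)
```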
Problem

Research questions and friction points this paper is trying to address.

Optimizes QAOA parameter initialization for combinatorial problems
Accelerates convergence and improves solution quality on Max-Cut (scored by the approximation ratio; see the sketch after this list)
Enables transferable parameters across varying problem sizes
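The "solution quality" in these bullets is the Max-Cut approximation ratio: the cut value a sampled bitstring achieves divided by the optimal cut. A self-contained sketch on a toy graph (graph and values are illustrative):

```python
import itertools

def cut_value(edges, bits):
    """Number of edges whose endpoints fall on opposite sides of the partition."""
    return sum(bits[u] != bits[v] for u, v in edges)

def approximation_ratio(edges, n, sampled_bits):
    """Sampled cut relative to the optimum, found here by brute force."""
    best = max(cut_value(edges, b) for b in itertools.product((0, 1), repeat=n))
    return cut_value(edges, sampled_bits) / best

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]    # toy 4-node graph
print(approximation_ratio(edges, 4, (0, 0, 1, 1)))  # 3 of the optimal 4 edges cut: 0.75
```

Brute force is only feasible for small n, but in the paper's range (n = 10–13, at most 2^13 bitstrings) it remains tractable, which is what makes exact approximation ratios reportable.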
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum meta-learning framework trains quantum sequence models
QK-LSTM optimizer achieves superior performance and fastest convergence
Synthesizes a single, fixed set of near-optimal parameters for perfect transferability (see the warm-start sketch below)
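A hedged sketch of how the transfer step in the last bullet could be used downstream: the single fixed parameter vector distilled on small instances warm-starts ordinary descent on a larger, unseen instance. `qaoa_cost` is any energy callable (such as the circuit sketched under the abstract); the finite-difference descent here is a generic stand-in, not the paper's procedure.

```python
import numpy as np

def warm_start(fixed_init, qaoa_cost, lr=0.05, iters=50, eps=1e-3):
    """Gradient descent on the QAOA energy, started from the transferred vector."""
    params = np.asarray(fixed_init, dtype=float).copy()
    for _ in range(iters):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            step = np.zeros_like(params)
            step[i] = eps
            # Central finite-difference estimate of dE/d(param_i).
            grad[i] = (qaoa_cost(params + step) - qaoa_cost(params - step)) / (2 * eps)
        params -= lr * grad
    return params
```

The claimed benefit is that `fixed_init`, trained at n = 10, already lies near a good region of the n = 13 landscape, so far fewer descent steps are needed than from a random start.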
Yu-Cheng Lin
Department of Electrophysics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Yu-Chao Hsu
National Center for High-Performance Computing, National Institutes of Applied Research, Hsinchu, Taiwan
Samuel Yen-Chi Chen
Wells Fargo
quantum computation · quantum information · machine learning · quantum machine learning