🤖 AI Summary
The Quantum Approximate Optimization Algorithm (QAOA) suffers from slow convergence and poor solution quality on near-term quantum hardware because its highly non-convex energy landscape makes variational parameter optimization difficult.
Method: We propose a meta-learning framework for QAOA, introducing the Quantum Kernel Long Short-Term Memory (QK-LSTM) network as a lightweight meta-learner (only 43 trainable parameters) that enables efficient, problem-scale-agnostic parameter initialization. QK-LSTM integrates quantum kernel methods into a recurrent neural network architecture, supporting joint classical–quantum sequence training and learning an adaptive optimization strategy.
Contribution/Results: Evaluated on Max-Cut, QK-LSTM significantly improves approximation ratios and convergence speed. It generalizes across problem sizes (n = 10–13) and delivers cross-scale acceleration, achieving, for the first time, fully transferable QAOA parameter initialization: initializations trained on small instances generalize effectively to larger, unseen ones.
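For context on what the meta-learner is initializing, the sketch below simulates depth-p QAOA for Max-Cut by direct statevector evolution and evaluates the expected cut size for a fixed parameter set. This is a generic textbook construction, not code from the paper; the graph, function names, and parameter values are illustrative (for a 4-node ring, (γ, β) = (π/4, π/8) is the known depth-1 optimum).

```python
import numpy as np

def maxcut_costs(n, edges):
    """Diagonal of the Max-Cut cost Hamiltonian: cut size of each bitstring."""
    costs = np.zeros(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> q) & 1 for q in range(n)]
        costs[idx] = sum(bits[i] != bits[j] for i, j in edges)
    return costs

def qaoa_expectation(n, edges, gammas, betas):
    """Statevector simulation of depth-p QAOA; returns the expected cut size."""
    costs = maxcut_costs(n, edges)
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)  # uniform |+>^n start
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * costs) * psi              # cost layer e^{-i g C}
        c, s = np.cos(b), -1j * np.sin(b)                # mixer RX(2b) per qubit
        for q in range(n):
            psi = psi.reshape(2 ** (n - 1 - q), 2, 2 ** q)
            a0, a1 = psi[:, 0, :].copy(), psi[:, 1, :].copy()
            psi[:, 0, :] = c * a0 + s * a1
            psi[:, 1, :] = s * a0 + c * a1
            psi = psi.reshape(-1)
    return float(np.real(np.sum(np.abs(psi) ** 2 * costs)))

# 4-node ring; max cut = 4. At (gamma, beta) = (pi/4, pi/8), depth-1 QAOA
# reaches expectation 3.0, i.e. approximation ratio 0.75.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
ratio = qaoa_expectation(4, edges, [np.pi / 4], [np.pi / 8]) / 4.0
```

A learned initializer's job is to pick the starting (γ, β) so that few (or, with fully transferable parameters, no) further optimization steps are needed.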
📝 Abstract
The Quantum Approximate Optimization Algorithm (QAOA) is a leading approach for solving combinatorial optimization problems on near-term quantum processors. However, finding good variational parameters remains a significant challenge due to the non-convex energy landscape, often resulting in slow convergence and poor solution quality. In this work, we propose a quantum meta-learning framework that trains advanced quantum sequence models to generate effective parameter initialization policies. We investigate four classical and quantum sequence models, including the Quantum Kernel-based Long Short-Term Memory (QK-LSTM), as learned optimizers in a "learning to learn" paradigm. Our numerical experiments on the Max-Cut problem demonstrate that the QK-LSTM optimizer achieves superior performance, obtaining the highest approximation ratios and exhibiting the fastest convergence rate across all tested problem sizes (n = 10 to 13). Crucially, the QK-LSTM model achieves perfect parameter transferability by synthesizing a single, fixed set of near-optimal parameters, leading to a sustained acceleration of convergence even when generalizing to larger problems. This capability is enabled by the compactness and expressive power of the quantum kernel architecture. The QK-LSTM, with only 43 trainable parameters, substantially outperforms the classical LSTM (56 parameters) and other quantum sequence models, establishing a robust pathway toward highly efficient parameter initialization for variational quantum algorithms in the NISQ era.
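The "learning to learn" loop described in the abstract can be sketched as follows: a recurrent cell observes the current energy and parameters at each step and emits the next parameter proposal. Everything here is an illustrative assumption, not the paper's implementation: the `LSTMCell` is a plain classical, randomly initialized (untrained) stand-in for the QK-LSTM, the input/hidden sizes and the linear readout `W_out` are arbitrary choices, and the objective is a toy surrogate for a QAOA energy surface.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class LSTMCell:
    """Hand-rolled classical LSTM cell -- a stand-in for the paper's QK-LSTM."""
    def __init__(self, n_in, n_hid):
        k = 1.0 / np.sqrt(n_hid)
        self.W = rng.uniform(-k, k, (4 * n_hid, n_in + n_hid))
        self.b = np.zeros(4 * n_hid)

    def step(self, x, h, c):
        i, f, g, o = np.split(self.W @ np.concatenate([x, h]) + self.b, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # gated cell-state update
        h = sigmoid(o) * np.tanh(c)                    # new hidden state
        return h, c

def propose_schedule(objective, n_params=2, n_steps=8, n_hid=4):
    """Unrolled learned-optimizer loop: at each step the cell sees
    (energy, params) and proposes the next parameters via a linear readout."""
    cell = LSTMCell(n_in=1 + n_params, n_hid=n_hid)
    W_out = rng.uniform(-0.1, 0.1, (n_params, n_hid))
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    params, trace = np.zeros(n_params), []
    for _ in range(n_steps):
        e = objective(params)
        h, c = cell.step(np.concatenate([[e], params]), h, c)
        params = params + W_out @ h   # proposed update to (gamma, beta)
        trace.append((params.copy(), e))
    return trace

# Toy stand-in for a depth-1 QAOA energy surface: params = [gamma, beta].
trace = propose_schedule(lambda p: float(np.cos(p[0]) * np.sin(2 * p[1])))
```

In meta-training, the cell's weights would be optimized across many small problem instances so that the unrolled trajectory reaches low energy quickly; the transferability result in the abstract corresponds to the trained cell emitting one fixed, near-optimal parameter set that works across problem sizes.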