Accelerating Feedback-based Algorithms for Quantum Optimization Using Gradient Descent

📅 2026-02-12
📈 Citations: 0
Influential: 0

📝 Abstract
Feedback-based methods have gained significant attention as an alternative training paradigm for the Quantum Approximate Optimization Algorithm (QAOA) in solving combinatorial optimization problems such as MAX-CUT. In particular, Quantum Lyapunov Control (QLC) employs feedback-driven control laws that guarantee monotonically non-decreasing objective values, substantially reduce the training overhead of QAOA, and mitigate barren plateaus. However, these methods may require long control sequences, leading to sub-optimal convergence rates. In this work, we propose a hybrid method that incorporates per-layer gradient estimation to accelerate the convergence of QLC while preserving its low training overhead and stability guarantees. By leveraging layer-wise gradient information, the proposed approach selects near-optimal control parameters, resulting in significantly faster convergence and improved robustness. We validate the effectiveness of the method through extensive numerical experiments across a range of problem instances and optimization settings.
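To make the feedback-driven control law concrete, the following is a minimal sketch of a QLC-style (FALQON-like) loop on a toy two-qubit MAX-CUT instance, simulated exactly with NumPy. It is an illustration of the general feedback paradigm the abstract describes, not the paper's method: at each layer, the mixer angle is set from the measured feedback signal A = <i[H_m, H_c]>, which by the Lyapunov argument makes the cost expectation non-increasing. All names and the problem instance (a single edge) are my own assumptions; the paper's contribution, per-layer gradient estimation for choosing near-optimal angles, would replace the simple rule `beta = -A` below.

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Toy MAX-CUT instance: one edge (0, 1) on two qubits (my assumption, for
# illustration). Cost Hamiltonian H_c = 0.5 * (Z0 Z1 - I); its ground energy
# -1 corresponds to the maximum cut.
Hc = 0.5 * (np.kron(Z, Z) - np.eye(4))
# Standard transverse-field mixer H_m = X0 + X1
Hm = np.kron(X, I2) + np.kron(I2, X)

def evolve(psi, H, t):
    """Exact propagator exp(-i H t) |psi> via eigendecomposition (H Hermitian)."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

def expval(psi, O):
    return float(np.real(psi.conj() @ (O @ psi)))

# Feedback observable i[H_m, H_c]: under H = H_c + beta * H_m,
# d<H_c>/dt = beta * <i[H_m, H_c]>, so beta = -<i[H_m, H_c]> keeps it <= 0.
comm = 1j * (Hm @ Hc - Hc @ Hm)

dt = 0.1                               # Trotter step (illustrative value)
psi = np.full(4, 0.5, dtype=complex)   # |+>^2, ground state of -H_m
beta = 0.0
costs = [expval(psi, Hc)]
for k in range(100):
    psi = evolve(psi, Hc, dt)          # cost-Hamiltonian layer
    psi = evolve(psi, Hm, beta * dt)   # mixer layer with feedback-chosen angle
    beta = -expval(psi, comm)          # Lyapunov feedback law
    costs.append(expval(psi, Hc))

print(costs[0], costs[-1])             # cost decreases from -0.5 toward -1
```

The long control sequences mentioned in the abstract show up here as the many small steps needed before `costs` approaches the optimum; a layer-wise gradient-informed choice of `beta` (e.g., a per-layer line search) is the kind of acceleration the paper targets.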
Problem

Research questions and friction points this paper is trying to address.

Quantum Optimization
Feedback-based Algorithms
Convergence Rate
QAOA
Quantum Lyapunov Control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum Approximate Optimization Algorithm
Quantum Lyapunov Control
gradient descent
feedback-based control
layer-wise gradient estimation
Masih Mozakka
Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington, Bloomington, IN, United States
Mohsen Heidari
Assistant Professor at Indiana University Bloomington
Quantum Computing · Machine Learning · Quantum Information Theory