Reinforcement Learning for Parameterized Quantum State Preparation: A Comparative Study

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the efficient preparation of quantum states with continuous parameters using reinforcement learning (RL), balancing accuracy and scalability. It extends RL beyond discrete gate selection to the synthesis of parameterized quantum circuits, systematically comparing single-stage and two-stage strategies for jointly optimizing gate types, target qubits, and rotation angles. Leveraging proximal policy optimization (PPO) and advantage actor-critic (A2C) within the Gymnasium and PennyLane frameworks, the approach employs parameter-shift gradients and the Adam optimizer for training. Experimental results demonstrate that PPO under a single-stage strategy reliably prepares computational basis states (83%–99% success) and Bell states (61%–77% success), though scalability saturates around target complexity λ ≈ 3–4, and ten-qubit tasks remain out of reach.

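The single-stage formulation couples a discrete choice (gate type, target qubit) with a continuous one (rotation angle). A minimal sketch of how such a hybrid action space can be expressed in Gymnasium follows; the gate set, bounds, and field names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from gymnasium import spaces

# Hypothetical joint action for the single-stage strategy: the agent
# picks a gate type, a target qubit, and a continuous rotation angle
# in a single step. All names and ranges here are illustrative.
n_qubits = 2
action_space = spaces.Dict({
    "gate": spaces.Discrete(4),           # e.g. RX, RY, RZ, CNOT
    "qubit": spaces.Discrete(n_qubits),   # which wire the gate acts on
    "angle": spaces.Box(low=-np.pi, high=np.pi,
                        shape=(1,), dtype=np.float32),  # unused for CNOT
})

# Sampling yields a dict with one discrete gate, one qubit index,
# and one continuous angle, i.e. the joint action the summary describes.
print(action_space.sample())
```
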
📝 Abstract
We extend directed quantum circuit synthesis (DQCS) with reinforcement learning from purely discrete gate selection to parameterized quantum state preparation with continuous single-qubit rotations \(R_x\), \(R_y\), and \(R_z\). We compare two training regimes: a one-stage agent that jointly selects the gate type, the affected qubit(s), and the rotation angle; and a two-stage variant that first proposes a discrete circuit and subsequently optimizes the rotation angles with Adam using parameter-shift gradients. Using Gymnasium and PennyLane, we evaluate Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C) on systems comprising two to ten qubits and on targets of increasing complexity with \(\lambda\) ranging from one to five. Whereas A2C does not learn effective policies in this setting, PPO succeeds under stable hyperparameters (one-stage: learning rate approximately \(5\times10^{-4}\) with a self-fidelity-error threshold of 0.01; two-stage: learning rate approximately \(10^{-4}\)). Both approaches reliably reconstruct computational basis states (between 83% and 99% success) and Bell states (between 61% and 77% success). However, scalability saturates for \(\lambda\) of approximately three to four and does not extend to ten-qubit targets even at \(\lambda=2\). The two-stage method offers only marginal accuracy gains while requiring around three times the runtime. For practicality under a fixed compute budget, we therefore recommend the one-stage PPO policy, provide explicit synthesized circuits, and contrast with a classical variational baseline to outline avenues for improved scalability.
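
The two-stage regime separates structure search from angle refinement. A minimal sketch of that second stage in PennyLane, assuming an illustrative fixed circuit and a Bell-state target (neither is taken from the paper), with the overlap written as a projector expectation so that parameter-shift gradients apply:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

# Illustrative target: the Bell state (|00> + |11>)/sqrt(2).
target = np.array([1.0, 0.0, 0.0, 1.0], requires_grad=False) / np.sqrt(2.0)

@qml.qnode(dev, diff_method="parameter-shift")
def fidelity(angles):
    # Stand-in for a discrete structure stage one might have proposed:
    # a parameterized rotation, an entangler, and a phase rotation.
    qml.RY(angles[0], wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RZ(angles[1], wires=1)
    # Overlap |<target|psi(angles)>|^2 as the expectation of the
    # projector |target><target|, so parameter-shift rules apply.
    return qml.expval(qml.Projector(target, wires=range(n_qubits)))

def cost(angles):
    return 1.0 - fidelity(angles)  # infidelity to minimize

# Stage two: refine the angles with Adam. The step size here is chosen
# for a quick demo; the abstract reports a learning rate near 1e-4.
opt = qml.AdamOptimizer(stepsize=0.05)
angles = np.array([0.1, 0.0], requires_grad=True)
for _ in range(300):
    angles = opt.step(cost, angles)

print(f"final fidelity: {float(fidelity(angles)):.4f}")
```

Phrasing the fidelity as a projector expectation keeps the cost a standard observable, which is what makes the parameter-shift/Adam combination described in the abstract applicable.
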
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Parameterized Quantum State Preparation
Quantum Circuit Synthesis
Scalability
Continuous Rotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Parameterized Quantum Circuits
Quantum State Preparation
Proximal Policy Optimization
Continuous Gate Optimization
Gerhard Stenzel
PhD Student, LMU Munich
quantum machine learning, optimization, computer science
Isabella Debelic
LMU Munich, Department of Computer Science, Chair of Mobile and Distributed Systems
Michael Kölle
LMU Munich
Quantum Artificial Intelligence, Multi Agent Systems, Reinforcement Learning
Tobias Rohe
Ludwig-Maximilians-Universität
Quantum Computing, Quantum Applications, Optimization
Leo Sünkel
LMU Munich
Julian Hager
LMU Munich, Department of Computer Science, Chair of Mobile and Distributed Systems
Claudia Linnhoff-Popien
LMU Munich, Department of Computer Science, Chair of Mobile and Distributed Systems