Quantum Circuit Generation via test-time learning with large language models

📅 2026-02-03
🤖 AI Summary
This work proposes a black-box optimization framework based on large language models (LLMs) to overcome the performance bottlenecks of conventional methods in generating highly entangled quantum circuits. The approach formulates quantum circuit synthesis as a closed-loop, test-time optimization problem: the LLM generates editing suggestions for gate sequences, which are evaluated by an external simulator using the Meyer–Wallach entanglement measure. Iterative refinement is achieved through a lightweight test-time learning strategy that integrates explicit memory traces, score-difference feedback prompting, and restart-from-the-best sampling. Experiments on 20- and 25-qubit circuits demonstrate significant improvements in both entanglement quality and synthesis success rates. The study further reveals that high-performing circuits often correspond to stabilizer or graph states and highlights the critical influence of entanglement-metric properties and prompt design on overall effectiveness.
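For reference, the Meyer–Wallach measure scores a pure n-qubit state by the average purity of its single-qubit reduced density matrices: Q = 2(1 − (1/n) Σₖ Tr ρₖ²), ranging from 0 for product states to 1 for maximally entangled states such as GHZ. A minimal NumPy sketch of this evaluation (the function name and implementation are illustrative, not taken from the paper's code):

```python
import numpy as np

def meyer_wallach(state, n):
    """Meyer-Wallach global entanglement of an n-qubit pure state:
    Q = 2 * (1 - mean purity of the single-qubit reduced states)."""
    psi = np.asarray(state, dtype=complex).reshape([2] * n)
    purities = []
    for k in range(n):
        # Bring qubit k to the front, flatten the remaining qubits,
        # and trace them out to get the 2x2 reduced density matrix.
        m = np.moveaxis(psi, k, 0).reshape(2, -1)
        rho_k = m @ m.conj().T
        purities.append(np.real(np.trace(rho_k @ rho_k)))
    return 2.0 * (1.0 - np.mean(purities))

# A 3-qubit GHZ state scores Q = 1; a product state scores Q = 0.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1.0 / np.sqrt(2)
print(meyer_wallach(ghz, 3))
```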

📝 Abstract
Large language models (LLMs) can generate structured artifacts, but using them as dependable optimizers for scientific design requires a mechanism for iterative improvement under black-box evaluation. Here, we cast quantum circuit synthesis as a closed-loop, test-time optimization problem: an LLM proposes edits to a fixed-length gate list, and an external simulator evaluates the resulting state with the Meyer-Wallach (MW) global entanglement measure. We introduce a lightweight test-time learning recipe that reuses prior high-performing candidates as an explicit memory trace, augments prompts with score-difference feedback, and applies restart-from-the-best sampling to escape performance plateaus. In fixed 20-qubit settings, the loop without feedback or restart-from-the-best already improves random initial circuits over a range of gate budgets; the full learning strategy further lifts performance and success rate. In the 25-qubit setting, it mitigates a pronounced performance plateau that arises under naive querying. Beyond raw scores, we analyze the structure of the synthesized states and find that high-MW solutions can correspond to stabilizer or graph-state-like constructions, although full connectivity is not guaranteed, owing to properties of the metric and the prompt design. These results illustrate both the promise and the pitfalls of memory- and evaluator-guided LLM optimization for circuit synthesis, highlighting the critical role of prior theoretical results in designing a custom tool in support of research.
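The closed loop described in the abstract can be sketched as a generic optimizer skeleton: a propose step (the LLM, here a stand-in callable) suggests an edited candidate, the evaluator (the simulator) scores it, a bounded memory keeps the top candidates, the score difference is fed back, and the search periodically restarts from the best candidate. All names and the loop structure below are illustrative assumptions, not the paper's actual implementation:

```python
import random

def optimize_circuit(evaluate, propose, init, steps=50, memory_size=5, restart_every=10):
    """Closed-loop test-time optimization sketch. `propose` stands in for the
    LLM edit step; `evaluate` is the black-box scorer (e.g., a simulator
    returning the MW entanglement of the circuit's output state)."""
    best = (evaluate(init), init)
    memory = [best]                # explicit memory trace of high scorers
    current, prev_score = init, best[0]
    for t in range(1, steps + 1):
        # The real prompt would embed the memory and the score difference.
        cand = propose(current, memory, prev_score)
        score = evaluate(cand)
        memory.append((score, cand))
        memory = sorted(memory, key=lambda x: -x[0])[:memory_size]
        if score > best[0]:
            best = (score, cand)
        current, prev_score = cand, score
        if t % restart_every == 0:
            # Restart-from-the-best sampling to escape plateaus.
            current, prev_score = best[1], best[0]
    return best

# Toy usage with a stub "LLM": maximize -sum(|x - 5|) over an integer list.
random.seed(0)
toy_eval = lambda c: -sum(abs(x - 5) for x in c)
def toy_propose(current, memory, prev_score):
    c = list(current)
    c[random.randrange(len(c))] += random.choice([-1, 1])
    return c

score, circuit = optimize_circuit(toy_eval, toy_propose, [0, 0, 0, 0], steps=200)
```

The same skeleton applies unchanged when `evaluate` wraps a statevector simulator and `propose` wraps an LLM call that edits a fixed-length gate list.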
Problem

Research questions and friction points this paper is trying to address.

quantum circuit synthesis
large language models
black-box optimization
entanglement
test-time learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

test-time learning
large language models
quantum circuit synthesis
Meyer-Wallach entanglement
closed-loop optimization
Adriano Macarone-Palmieri
Dipartimento di Ingegneria, Università degli Studi di Palermo, Viale delle Scienze, 90128 Palermo, Italy
Rosario Lo Franco
Dipartimento di Ingegneria, Università di Palermo, Italy
Open Quantum Systems · Entanglement · Quantum Correlations · Quantum State Engineering · Quantum