🤖 AI Summary
Automated generation of large-scale parameterized quantum circuits for quantum optimization remains challenging. Method: This work pioneers the systematic fine-tuning of large language models (LLMs) for generating OpenQASM 3.0–compliant quantum circuits. It introduces a domain-informed data construction mechanism—integrating knowledge from QAOA, VQE, and adaptive VQE—and a syntax-constrained modeling approach to explicitly inject quantum-computing priors. Results: The LLM is fine-tuned on 14,000 synthetic benchmarks spanning 12 optimization problem classes. Generated circuits achieve 100% syntactic correctness and significantly outperform both random baselines and current state-of-the-art models in parameter quality. The approach directly enables quantum machine learning template construction and compiler benchmarking, establishing a novel paradigm for LLM-driven quantum software engineering.
📝 Abstract
Large language models (LLMs) have achieved remarkable outcomes in addressing complex problems, including mathematics, coding, and the analysis of large volumes of scientific reports. Yet few works have explored the potential of LLMs in quantum computing. The most challenging problem is how to leverage LLMs to automatically generate quantum circuits at large scale. In this paper, we address this challenge by fine-tuning LLMs and injecting domain-specific knowledge of quantum computing. In particular, we investigate mechanisms to generate training data sets and construct an end-to-end pipeline to fine-tune pre-trained LLMs that produce parameterized quantum circuits for optimization problems. We have prepared 14,000 quantum circuits covering a substantial part of the quantum optimization landscape: 12 optimization problem instances and their optimized QAOA, VQE, and adaptive VQE circuits. The fine-tuned LLMs can construct syntactically correct parameterized quantum circuits in the most recent OpenQASM 3.0. We have evaluated the quality of the parameters by comparing them to the optimized expectation values and distributions. Our evaluation shows that the fine-tuned LLM outperforms state-of-the-art models and that its parameters are better than random. The LLM-generated parameterized circuits and initial parameters can serve as a starting point for further optimization, e.g., as templates in quantum machine learning and as benchmarks for compilers and hardware.
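To make the target output format concrete, the sketch below builds the kind of OpenQASM 3.0 text a QAOA circuit for a ring-graph Max-Cut might take, with symbolic angles left free for a classical optimizer. The helper `build_qaoa_qasm` and the specific gate decomposition are illustrative assumptions, not the paper's actual data-construction pipeline.

```python
# Hypothetical sketch: emit an OpenQASM 3.0 program for one QAOA layer
# on a ring of n qubits. The function name and structure are illustrative,
# not taken from the paper's domain-informed data construction mechanism.

def build_qaoa_qasm(n: int) -> str:
    """Return an OpenQASM 3.0 program with symbolic parameters gamma, beta."""
    lines = [
        "OPENQASM 3.0;",
        'include "stdgates.inc";',
        "input float[64] gamma;",  # cost-layer angle, left free for the optimizer
        "input float[64] beta;",   # mixer-layer angle
        f"qubit[{n}] q;",
    ]
    # Uniform superposition over all basis states
    lines += [f"h q[{i}];" for i in range(n)]
    # Cost layer: ZZ interaction on each ring edge, decomposed as CX-RZ-CX
    for i in range(n):
        j = (i + 1) % n
        lines += [
            f"cx q[{i}], q[{j}];",
            f"rz(2 * gamma) q[{j}];",
            f"cx q[{i}], q[{j}];",
        ]
    # Mixer layer: single-qubit X rotations
    lines += [f"rx(2 * beta) q[{i}];" for i in range(n)]
    return "\n".join(lines)

print(build_qaoa_qasm(3))
```

A fine-tuned model that emits text of this shape is easy to validate mechanically: syntactic correctness can be checked by an OpenQASM 3.0 parser, and the `gamma`/`beta` values it proposes can be compared against classically optimized angles.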