🤖 AI Summary
Large language models (LLMs) exhibit low accuracy in generating executable Qiskit quantum code, particularly failing to produce circuits compatible with real quantum hardware. Method: This paper proposes a quantum-aware alignment framework comprising three components: (1) a synthetic data pipeline that generates paired quantum problems and unit tests; (2) a quantum-verifiable reward mechanism that incorporates direct execution feedback from physical quantum devices into training; and (3) preference alignment via a hybrid approach combining Direct Preference Optimization (DPO) and Group Relative Policy Optimization (GRPO). Contribution/Results: To our knowledge, this is the first work to enable quantum hardware feedback-driven optimization of LLM-generated quantum code. Evaluated on the Qiskit-HumanEval-hard benchmark, our method significantly outperforms the strongest open-source baselines, achieving new state-of-the-art performance in both functional correctness and hardware executability of generated Qiskit code.
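The verifiable-reward idea above can be illustrated with a minimal sketch. This is not the paper's implementation: where the paper scores candidate Qiskit programs by executing them on real quantum hardware, the hypothetical `quantum_verifiable_reward` below substitutes local execution of the synthetic unit test, returning a binary pass/fail reward suitable for preference-data construction or GRPO.

```python
def quantum_verifiable_reward(candidate_code: str, unit_test: str) -> float:
    """Binary verifiable reward: 1.0 if the generated code passes its
    paired unit test, 0.0 otherwise.

    Simplified stand-in: the paper's reward comes from execution feedback
    on physical quantum devices; here, running the unit test locally
    plays that role.
    """
    namespace: dict = {}
    try:
        # Define the candidate solution, then run the test's assertions.
        exec(candidate_code, namespace)
        exec(unit_test, namespace)
    except Exception:
        # Any failure (syntax error, runtime error, failed assert) -> no reward.
        return 0.0
    return 1.0
```

In a real pipeline the two `exec` calls would be replaced by transpiling the candidate circuit, submitting it to a backend, and checking the measured results against the expected distribution, all inside a sandbox with a timeout.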
📄 Abstract
Qiskit is an open-source quantum computing framework that allows users to design, simulate, and run quantum circuits on real quantum hardware. We explore post-training techniques for LLMs to assist in writing Qiskit code. We introduce quantum verification as an effective method for ensuring code quality and executability on quantum hardware. To support this, we developed a synthetic data pipeline that generates quantum problem–unit-test pairs and used it to create preference data for aligning LLMs with DPO. Additionally, we trained models using GRPO, leveraging quantum-verifiable rewards provided by the quantum hardware. Our best-performing model, combining DPO and GRPO, surpasses the strongest open-source baselines on the challenging Qiskit-HumanEval-hard benchmark.
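For readers unfamiliar with the DPO objective mentioned above, here is a minimal, stdlib-only sketch of the per-pair loss. The scalar sequence-level log-probabilities are hypothetical inputs; in practice they come from the policy being trained and a frozen reference model, and the loss is averaged over a batch of (chosen, rejected) preference pairs such as those produced by the synthetic pipeline.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for a single preference pair.

    margin = (log-ratio of policy vs. reference on the chosen completion)
           - (log-ratio on the rejected completion).
    Loss is the negative log-sigmoid of the scaled margin, so it shrinks
    as the policy prefers the chosen completion more strongly than the
    reference model does.
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With a zero margin the loss is log 2; a larger positive margin drives it toward zero, which is what pushes the model toward test-passing Qiskit code during alignment.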