QCoder Benchmark: Bridging Language Generation and Quantum Hardware through Simulator-Based Feedback

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) lack hardware-aware evaluation frameworks for quantum programming. Method: This paper introduces QCoder Benchmark, the first benchmarking framework to integrate quantum simulator feedback, featuring domain-specific metrics (circuit depth, execution time, and error classification) and both quantitative and qualitative comparison against human submissions from real quantum programming contests. The framework closes the loop through Python interfaces that link LLM code generation, quantum circuit compilation, simulation-based verification, and performance analytics. Contribution/Results: Experiments show GPT-4o achieves only 18.97% accuracy, whereas the reasoning-based model o3 reaches up to 78%, well above the average human success rate (39.98%). The benchmark dataset and evaluation API are open-sourced, laying a foundation for LLM-driven, hardware-aware quantum programming research.
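The closed loop described above (generate → compile → simulate → classify → feed back) can be sketched in a few lines of Python. All names below (`Feedback`, `evaluate_submission`, `fake_simulator`, the result fields) are illustrative stand-ins, not the paper's actual API; a real harness would invoke a quantum simulator rather than the dummy function used here.

```python
# Hypothetical sketch of a simulator-feedback evaluation loop.
# Names and fields are illustrative, not QCoder Benchmark's real API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Feedback:
    passed: bool            # did the circuit produce the expected result?
    circuit_depth: int      # depth reported by the simulator backend
    error_class: Optional[str]  # e.g. "compile_error", "runtime_error"


def evaluate_submission(code: str, simulate) -> Feedback:
    """Run generated code through a (stubbed) compile+simulate step and
    classify the outcome, mirroring the benchmark's error-classification idea."""
    try:
        result = simulate(code)  # stands in for compilation + simulation
    except SyntaxError:
        return Feedback(False, 0, "compile_error")
    except RuntimeError:
        return Feedback(False, 0, "runtime_error")
    return Feedback(result["correct"], result["depth"], None)


def fake_simulator(code: str) -> dict:
    """Dummy backend for illustration only."""
    if "raise" in code:
        raise RuntimeError("simulated hardware fault")
    return {"correct": True, "depth": 3}


fb = evaluate_submission("qc = build_circuit()", fake_simulator)
```

The structured `Feedback` object is the key design point: instead of a bare pass/fail signal, the loop returns metrics the LLM can condition on for its next generation attempt.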

📝 Abstract
Large language models (LLMs) have increasingly been applied to automatic programming code generation. This task can be viewed as a language generation task that bridges natural language, human knowledge, and programming logic. However, it remains underexplored in domains that require interaction with hardware devices, such as quantum programming, where human coders write Python code that is executed on a quantum computer. To address this gap, we introduce QCoder Benchmark, an evaluation framework that assesses LLMs on quantum programming with feedback from simulated hardware devices. Our benchmark offers two key features. First, it supports evaluation in a quantum simulator environment beyond conventional Python execution, providing feedback on domain-specific metrics such as circuit depth, execution time, and error classification, which can be used to guide better generation. Second, it incorporates human-written code submissions collected from real programming contests, enabling both quantitative comparisons and qualitative analyses of LLM outputs against human-written code. Our experiments reveal that even advanced models like GPT-4o achieve only around 18.97% accuracy, highlighting the difficulty of the benchmark. In contrast, reasoning-based models such as o3 reach up to 78% accuracy, outperforming the average success rate of human-written code (39.98%). We release the QCoder Benchmark dataset and public evaluation API to support further research.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs on quantum programming with hardware simulation feedback
Assessing code quality using domain-specific quantum computing metrics
Comparing LLM-generated quantum code against human-written contest submissions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses quantum simulator feedback for code evaluation
Incorporates human-written contest codes for comparison
Benchmarks LLMs on quantum programming with hardware metrics
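Circuit depth, one of the hardware metrics mentioned above, can be illustrated with a small dependency-free sketch. The gate-list representation and the `circuit_depth` function are illustrative assumptions; the benchmark itself obtains this metric from a quantum simulator.

```python
# Illustrative sketch: circuit depth as the number of sequential gate layers.
# Gate = (name, qubit_indices); gate names here are arbitrary examples.

def circuit_depth(gates):
    """Greedy layering: each gate lands one layer after the deepest
    layer already occupied by any qubit it acts on."""
    layer = {}  # qubit index -> deepest layer reached so far
    depth = 0
    for _name, qubits in gates:
        new_layer = max(layer.get(q, 0) for q in qubits) + 1
        for q in qubits:
            layer[q] = new_layer
        depth = max(depth, new_layer)
    return depth


# H and X touch one qubit each; each CX forces its two qubits into
# the same new layer, so the chained CXs make the circuit 3 layers deep.
example = [("h", [0]), ("cx", [0, 1]), ("x", [2]), ("cx", [1, 2])]
```

Two circuits with the same gate count can differ sharply in depth, which is why depth (not size) is the metric that tracks execution time and noise accumulation on real hardware.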
Taku Mikuriya
National Institute of Advanced Industrial Science and Technology (AIST)
Tatsuya Ishigaki
National Institute of Advanced Industrial Science and Technology (AIST)
Natural Language Processing · Text Generation · Text Summarization
Masayuki Kawarada
National Institute of Advanced Industrial Science and Technology (AIST)
Shunya Minami
National Institute of Advanced Industrial Science and Technology (AIST)
Tadashi Kadowaki
Unknown affiliation
Yohichi Suzuki
National Institute of Advanced Industrial Science and Technology (AIST)
Soshun Naito
The University of Tokyo
Shunya Takata
Keio University
Takumi Kato
NTT DATA GROUP Corporation
Tamotsu Basseda
Miletos inc.
Reo Yamada
University of Tsukuba
Hiroya Takamura
National Institute of Advanced Industrial Science and Technology (AIST)