🤖 AI Summary
Large language models (LLMs) lack hardware-aware evaluation frameworks for quantum programming code generation. Method: This paper introduces QCoder Benchmark, presented as the first benchmarking framework to integrate quantum simulator feedback. It features domain-specific metrics (circuit depth, execution time, and error classification) and combines quantitative and qualitative analysis against human implementations from real quantum programming competitions. The framework enables closed-loop integration via Python interfaces, linking LLM code generation, quantum circuit compilation, simulation-based verification, and performance analytics. Contribution/Results: Experiments show GPT-4o achieves only 18.97% accuracy, whereas the reasoning-based model o3 reaches 78.0%, well above the human average (39.98%). The project open-sources its benchmark dataset and evaluation API, establishing a foundation for LLM-driven, hardware-aware quantum programming research.
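To make the closed-loop idea concrete, here is a minimal sketch of what one evaluation step could look like, assuming Qiskit and its Aer simulator as the circuit library; the entry point `build_circuit` and the function `evaluate_candidate` are illustrative placeholders, not the QCoder Benchmark API.

```python
# Hypothetical sketch of one closed-loop step: LLM-generated Python is
# executed, the resulting circuit is compiled and simulated, and the
# metrics (correctness, circuit depth) are returned as feedback that a
# follow-up prompt could use. Not the paper's actual interface.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator


def evaluate_candidate(source: str, expected_bitstrings: set) -> dict:
    """Compile and simulate LLM-generated code, returning feedback metrics."""
    namespace: dict = {}
    try:
        exec(source, namespace)                                  # run generated Python
        circuit: QuantumCircuit = namespace["build_circuit"]()   # assumed entry point
    except Exception as err:                                     # coarse error classification
        return {"status": "compile_error", "detail": repr(err)}

    backend = AerSimulator()
    compiled = transpile(circuit, backend)
    counts = backend.run(compiled, shots=1024).result().get_counts()

    return {
        "status": "ok",
        "depth": compiled.depth(),                               # domain-specific metric
        "counts": counts,
        "passed": set(counts) == expected_bitstrings,
    }
```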
📝 Abstract
Large language models (LLMs) have increasingly been applied to automatic programming code generation. This task can be viewed as a language generation task that bridges natural language, human knowledge, and programming logic. However, it remains underexplored in domains that require interaction with hardware devices, such as quantum programming, where human coders write Python code that is executed on a quantum computer. To address this gap, we introduce QCoder Benchmark, an evaluation framework that assesses LLMs on quantum programming with feedback from simulated hardware devices. Our benchmark offers two key features. First, it supports evaluation in a quantum simulator environment beyond conventional Python execution, providing feedback on domain-specific metrics such as circuit depth, execution time, and error classification, which can be used to guide better generation. Second, it incorporates human-written code submissions collected from real programming contests, enabling both quantitative comparison and qualitative analysis of LLM outputs against human-written code. Our experiments reveal that even advanced models like GPT-4o achieve only 18.97% accuracy, highlighting the difficulty of the benchmark. In contrast, reasoning-based models such as o3 reach up to 78% accuracy, outperforming the average success rate of human-written code (39.98%). We release the QCoder Benchmark dataset and a public evaluation API to support further research.
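The abstract's first feature, simulator feedback richer than plain pass/fail Python execution, can be illustrated with a small sketch. The exception categories, field names, and the function `run_with_feedback` below are assumptions for illustration only; the paper's released API may differ.

```python
# Illustrative sketch (not the released QCoder API) of the kind of
# domain-specific feedback a simulator environment can return beyond
# plain Python execution: wall-clock execution time, circuit depth,
# and a coarse error class usable to guide regeneration.
import time
from qiskit import QuantumCircuit, transpile
from qiskit.exceptions import QiskitError
from qiskit_aer import AerSimulator


def run_with_feedback(circuit: QuantumCircuit, shots: int = 1024) -> dict:
    backend = AerSimulator()
    start = time.perf_counter()
    try:
        compiled = transpile(circuit, backend)
        result = backend.run(compiled, shots=shots).result()
    except QiskitError as err:                       # circuit-level failures
        return {"error_class": "circuit_error", "detail": str(err)}
    except Exception as err:                         # everything else
        return {"error_class": "runtime_error", "detail": repr(err)}

    return {
        "error_class": None,
        "execution_time_s": time.perf_counter() - start,
        "circuit_depth": compiled.depth(),
        "counts": result.get_counts(),
    }
```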