Circuit Partitioning Using Large Language Models for Quantum Compilation and Simulations

📅 2025-05-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In the NISQ era, optimizing noise-sensitive gates in large-scale quantum circuits remains intractable due to hardware constraints and compilation complexity. Method: This work pioneers the integration of large language models (LLMs) into automated quantum circuit partitioning—a critical preprocessing step for downstream gate-minimization compilation. Departing from conventional heuristic partitioning methods that disregard compilation objectives, we propose a synergistic framework comprising QASM-syntax-aware instruction fine-tuning, rapid policy distillation, and few-shot prompting, applied to open-source LLMs (e.g., Llama, Mistral) via supervised fine-tuning. Contribution/Results: Experimental evaluation demonstrates that the fine-tuned model achieves a partitioning accuracy of 53.4%, substantially outperforming zero-shot and few-shot baselines (0% accuracy). To our knowledge, this is the first empirical validation of LLMs' feasibility and effectiveness for structured subtasks in quantum circuit compilation.
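The summary describes QASM-syntax-aware instruction fine-tuning. A minimal sketch of what one supervised training record might look like is shown below; the field names, the block-size value, and the label format are illustrative assumptions, not the paper's actual schema.

```python
import json

# Hypothetical shape of one supervised fine-tuning record: the instruction
# and raw OpenQASM form the prompt, and the completion is the target
# partitioning. Field names and label format are assumptions for illustration.
record = {
    "instruction": "Partition the following OpenQASM 2.0 circuit into "
                   "blocks acting on at most 3 qubits.",
    "input": (
        "OPENQASM 2.0;\n"
        'include "qelib1.inc";\n'
        "qreg q[5];\n"
        "h q[0];\ncx q[0],q[1];\ncx q[1],q[2];\n"
        "cx q[2],q[3];\ncx q[3],q[4];\n"
    ),
    "output": "block 0: qubits [0,1,2]; block 1: qubits [2,3,4]",
}

# One line of a JSONL fine-tuning dataset.
line = json.dumps(record)
```

Records of this shape are what a standard supervised fine-tuning pipeline for open-source models (e.g., Llama or Mistral) would consume, one JSON object per line.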

📝 Abstract
We are in the midst of the noisy intermediate-scale quantum (NISQ) era, where quantum computers are limited by noisy gates, some of which are more error-prone than others and can render the final computation incomprehensible. Quantum circuit compilation algorithms attempt to minimize these noisy gates when mapping quantum algorithms onto quantum hardware but face computational challenges that restrict their application to circuits with no more than 5-6 qubits, necessitating the partitioning of large circuits before the application of noisy quantum gate minimization algorithms. The existing generation of these algorithms is heuristic in nature and does not account for downstream gate minimization tasks. Large language models (LLMs) have the potential to change this and help improve quantum circuit partitions. This paper investigates the use of LLMs, such as Llama and Mistral, for partitioning quantum circuits by capitalizing on their abilities to understand and generate code, including QASM. Specifically, we teach LLMs to partition circuits using the quick partition approach of the Berkeley Quantum Synthesis Toolkit. Through experimental evaluations, we show that careful fine-tuning of open source LLMs enables us to obtain an accuracy of 53.4% for the partition task, while off-the-shelf LLMs are unable to correctly partition circuits using standard 1-shot and few-shot approaches.
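The abstract refers to the quick partition approach of the Berkeley Quantum Synthesis Toolkit: grouping a circuit's gates into contiguous blocks that each touch only a few qubits, so that expensive gate-minimization can run per block. The sketch below is a toy greedy partitioner in that spirit, not BQSKit's actual implementation; the gate-list format and `block_size=3` are assumptions for illustration.

```python
def quick_partition(gates, block_size=3):
    """Greedily pack gates into blocks touching at most `block_size` qubits.

    `gates` is a list of (name, qubit_indices) tuples, e.g. ("cx", (0, 1)).
    Returns a list of (qubit_set, gate_list) blocks, in circuit order.
    """
    blocks = []
    cur_qubits, cur_gates = set(), []
    for name, qubits in gates:
        merged = cur_qubits | set(qubits)
        if len(merged) <= block_size:
            # Gate fits in the current block without exceeding the qubit budget.
            cur_qubits = merged
            cur_gates.append((name, qubits))
        else:
            # Close the current block and start a new one with this gate.
            blocks.append((frozenset(cur_qubits), cur_gates))
            cur_qubits, cur_gates = set(qubits), [(name, qubits)]
    if cur_gates:
        blocks.append((frozenset(cur_qubits), cur_gates))
    return blocks


# A 5-qubit CNOT chain splits into two 3-qubit blocks under this heuristic.
circuit = [
    ("h", (0,)), ("cx", (0, 1)), ("cx", (1, 2)),
    ("cx", (2, 3)), ("cx", (3, 4)),
]
parts = quick_partition(circuit)
```

Each resulting block spans few enough qubits for the small-circuit gate-minimization compilers the abstract describes; the paper's task is to have a fine-tuned LLM reproduce such partitions directly from QASM text.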
Problem

Research questions and friction points this paper is trying to address.

Partitioning large quantum circuits for NISQ era limitations
Improving quantum circuit compilation with LLM-based partitioning
Enhancing accuracy in circuit partitioning using fine-tuned LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs partition quantum circuits effectively
Fine-tuning enhances LLM accuracy to 53.4%
Utilizes quick partition approach from Berkeley Toolkit
Pranav Sinha
Dept. of Computer Science, Oakland University, Rochester, MI, USA
Sumit Kumar Jha
University of Florida
Sunny Raj
Oakland University
Machine learning