LLM-Guided Ansätze Design for Quantum Circuit Born Machines in Financial Generative Modeling

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Quantum Circuit Born Machines (QCBMs) on Noisy Intermediate-Scale Quantum (NISQ) devices face a fundamental trade-off between expressive power and hardware feasibility in financial generative modeling. Method: We propose a large language model (LLM)-driven, hardware-aware ansatz design framework that integrates prompt engineering and iterative feedback to automatically generate shallow-depth, high-fidelity quantum circuits tailored to real-device constraints—including chip topology and noise profiles. Contribution/Results: Our approach incorporates KL-divergence-based evaluation, circuit depth optimization, and empirical validation on 12-qubit IBM quantum hardware, achieving superior generative performance and modeling accuracy over baseline methods. Crucially, this work introduces the first LLM-guided architecture search for QCBMs, enabling end-to-end, automated synthesis of deployable quantum generative models directly from hardware specifications.

📝 Abstract
Quantum generative modeling using quantum circuit Born machines (QCBMs) shows promising potential for practical quantum advantage. However, discovering ansätze that are both expressive and hardware-efficient remains a key challenge, particularly on noisy intermediate-scale quantum (NISQ) devices. In this work, we introduce a prompt-based framework that leverages large language models (LLMs) to generate hardware-aware QCBM architectures. Prompts are conditioned on qubit connectivity, gate error rates, and hardware topology, while iterative feedback, including Kullback-Leibler (KL) divergence, circuit depth, and validity, is used to refine the circuits. We evaluate our method on a financial modeling task involving daily changes in Japanese government bond (JGB) interest rates. Our results show that the LLM-generated ansätze are significantly shallower and achieve superior generative performance compared to the standard baseline when executed on real IBM quantum hardware using 12 qubits. These findings demonstrate the practical utility of LLM-driven quantum architecture search and highlight a promising path toward robust, deployable generative models for near-term quantum devices.
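The abstract's core feedback signal is the KL divergence between the target distribution (here, discretized daily JGB rate changes) and the QCBM's sampled bitstring distribution. A minimal sketch of that evaluation, using toy probabilities rather than the paper's actual JGB data:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over bitstrings."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    q = np.clip(q, eps, None)  # guard against log(0) for unsampled bitstrings
    mask = p > 0               # terms with p=0 contribute nothing to the sum
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Illustrative target: empirical histogram of discretized rate changes
target = np.array([0.1, 0.4, 0.3, 0.2])
# Illustrative model: normalized bitstring counts sampled from a QCBM
model = np.array([0.15, 0.35, 0.3, 0.2])
print(kl_divergence(target, model))
```

A lower value means the Born machine's output distribution better matches the financial target; the `eps` clip matters in practice because finite-shot sampling leaves some bitstrings with zero observed probability.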
Problem

Research questions and friction points this paper is trying to address.

Designing expressive and hardware-efficient quantum ansätze
Optimizing quantum circuits for noisy intermediate-scale quantum (NISQ) devices
Generating robust quantum models for financial data simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-generated hardware-aware QCBM architectures
Iterative feedback refines circuits using KL divergence
Shallower ansätze achieve superior generative performance
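The generate-evaluate-refine loop described above can be sketched as follows. All names here (`build_prompt`, `mock_llm`, `evaluate_circuit`, `search_loop`) are hypothetical stand-ins, not the paper's implementation: the real system would call an actual LLM and run the candidate QCBM on hardware.

```python
import random

def build_prompt(hardware_spec, feedback):
    # Condition the prompt on hardware constraints and prior results,
    # mirroring the paper's hardware-aware prompting idea.
    return (f"Design a shallow QCBM ansatz for: {hardware_spec}. "
            f"Previous attempt: {feedback or 'none'}")

def mock_llm(prompt):
    # Stand-in for a real LLM call; returns a toy "circuit" description.
    depth = random.randint(3, 12)
    return {"depth": depth, "layers": ["ry", "cx"] * depth}

def evaluate_circuit(circuit, target_dist):
    # Stand-in scoring; the paper would execute the QCBM and compute the
    # KL divergence of its samples against the target distribution.
    kl = random.uniform(0.01, 1.0)
    return kl, circuit["depth"], True

def search_loop(llm, hardware_spec, target_dist, n_rounds=5):
    best, feedback = None, ""
    for _ in range(n_rounds):
        circuit = llm(build_prompt(hardware_spec, feedback))
        kl, depth, valid = evaluate_circuit(circuit, target_dist)
        if valid and (best is None or kl < best[0]):
            best = (kl, depth, circuit)
        # Feed the metrics back into the next prompt for refinement.
        feedback = f"KL={kl:.3f}, depth={depth}, valid={valid}"
    return best

best = search_loop(mock_llm, "12-qubit IBM device, heavy-hex coupling", None)
print(f"best KL={best[0]:.3f} at depth {best[1]}")
```

The design choice is that the LLM never sees raw measurement data, only scalar feedback (KL, depth, validity), which keeps prompts short while still steering the search toward shallow, high-fidelity circuits.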