🤖 AI Summary
This work investigates the optimization landscape of IQP quantum circuit Born machines trained with the maximum mean discrepancy (MMD) loss and demonstrates that random initialization leads to barren plateaus, severely hindering trainability. The study establishes, for the first time, that fully random angle initialization induces exponentially vanishing gradients. To overcome this challenge, the authors propose two initialization strategies: one informed by the data distribution and a data-agnostic alternative that introduces a bias term; for both, rigorous lower bounds on the loss variance guarantee non-vanishing gradients and faster convergence. Empirical validation on a 150-qubit generative model trained on genomic data confirms that the proposed methods significantly improve both convergence speed and generative performance.
📝 Abstract
Quantum circuit Born machines based on instantaneous quantum polynomial-time (IQP) circuits are natural candidates for quantum generative modeling, both because of their probabilistic structure and because IQP sampling is provably classically hard in certain regimes. Recent proposals train IQP-QCBMs with Maximum Mean Discrepancy (MMD) losses built from low-body Pauli-$Z$ correlators, but the effect of initialization on the resulting optimization landscape remains poorly understood. In this work, we address this gap by first proving that the MMD loss landscape suffers from barren plateaus under random full-angle-range initialization of IQP circuits. We then establish lower bounds on the loss variance for the identity initialization and for an unbiased data-agnostic initialization. We additionally consider a data-dependent initialization that is better aligned with the target distribution and, under suitable assumptions, enjoys provably non-vanishing gradients and generally converges faster to a good minimum, as demonstrated by our training of circuits with 150 qubits on genomic data. Finally, as a by-product, the developed variance lower-bound framework applies to a general class of non-linear losses, offering a broader toolset for analyzing warm starts in quantum machine learning.
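To make the loss concrete, here is a minimal, illustrative sketch of an MMD estimate between two sets of bitstring samples, using a simple kernel built from low-body Pauli-$Z$ correlators (each bit $x_i$ maps to the eigenvalue $z_i = 1 - 2x_i$, and features are products $\prod_{i\in a} z_i$ over subsets $a$ of small weight). The function names, the specific linear-kernel choice, and the uniform weighting over correlators are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np
from itertools import combinations

def low_body_features(samples, max_weight=2):
    """Map 0/1 bitstrings to Pauli-Z correlator features <Z_a> for |a| <= max_weight.

    samples: (n, d) array of bits. Returns an (n, n_features) array where each
    column is the product of Z eigenvalues (+/-1) over one subset of qubits.
    """
    z = 1.0 - 2.0 * samples.astype(float)  # Z eigenvalue of each bit: 0 -> +1, 1 -> -1
    d = samples.shape[1]
    feats = []
    for w in range(1, max_weight + 1):
        for idx in combinations(range(d), w):
            feats.append(np.prod(z[:, idx], axis=1))
    return np.stack(feats, axis=1)

def mmd2(x, y, max_weight=2):
    """Biased MMD^2 estimate with the linear kernel k(x, y) = phi(x).phi(y) / n_feat.

    For a linear kernel, MMD^2 reduces to the squared distance between the
    mean feature (correlator) vectors of the two sample sets.
    """
    fx = low_body_features(x, max_weight)
    fy = low_body_features(y, max_weight)
    diff = fx.mean(axis=0) - fy.mean(axis=0)
    return float(diff @ diff) / fx.shape[1]
```

With this kernel, two sample sets from the same distribution give an MMD near zero, while a distribution with biased marginals or correlations is separated by the low-body correlator mismatch; this is the quantity a QCBM trainer would minimize over circuit angles.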