🤖 AI Summary
Variational quantum algorithms (VQAs) suffer from optimization difficulties on NISQ devices due to poor parameter initialization. Existing machine learning–based initialization methods exhibit limited generalizability, relying heavily on single-task and small-sample settings. This work pioneers a generative learning perspective for VQA parameter initialization and proposes DiffQ—a unified initialization framework grounded in denoising diffusion probabilistic models (DDPMs). To overcome the constraints of small-scale, task-specific benchmarks, we construct the first large-scale, cross-domain benchmark dataset comprising 15,085 instances spanning three quantum domains and five distinct VQA tasks. Extensive experiments demonstrate that DiffQ substantially improves training efficiency: it reduces initial loss by up to 8.95× and decreases convergence steps by up to 23.4%, consistently outperforming all existing baselines across diverse VQA applications.
📝 Abstract
Variational Quantum Algorithms (VQAs) are widely used in the noisy intermediate-scale quantum (NISQ) era, but their trainability and performance depend critically on initialization parameters that shape the optimization landscape. Existing machine learning–based initializers achieve state-of-the-art results yet remain constrained to single-task domains and small datasets of only hundreds of samples. We address these limitations by reformulating VQA parameter initialization as a generative modeling problem and introducing DiffQ, a parameter initializer based on the Denoising Diffusion Probabilistic Model (DDPM). To support robust training and evaluation, we construct a dataset of 15,085 instances spanning three domains and five representative tasks. Experiments demonstrate that DiffQ surpasses baselines, reducing initial loss by up to 8.95× and convergence steps by up to 23.4%.
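To make the generative-initialization idea concrete, below is a minimal sketch of standard DDPM ancestral sampling applied to a VQA parameter vector: start from Gaussian noise and iteratively denoise to obtain candidate circuit angles. This is not the paper's implementation; the names (`ParamDenoiser`, `sample_init_params`), the MLP denoiser, the timestep conditioning, and the noise schedule are all illustrative assumptions, and in DiffQ the denoiser would be trained on the collected dataset of good VQA solutions.

```python
# Hypothetical sketch of DDPM-based sampling of VQA initial parameters.
# The architecture and conditioning here are placeholders, not DiffQ's.
import torch
import torch.nn as nn


class ParamDenoiser(nn.Module):
    """Toy denoiser predicting the noise added to a parameter vector."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Append a normalized timestep as a crude conditioning signal.
        t_emb = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, t_emb], dim=-1))


@torch.no_grad()
def sample_init_params(model: nn.Module, dim: int, steps: int = 1000) -> torch.Tensor:
    """Standard DDPM ancestral sampling (Ho et al., 2020):
    x_{t-1} = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t) + sigma_t * z."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, dim)  # x_T ~ N(0, I)
    for t in reversed(range(steps)):
        eps = model(x, torch.tensor([t]))
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x.squeeze(0)  # use as the initial angles of the variational circuit


if __name__ == "__main__":
    n_params = 24  # e.g., a small ansatz with 24 rotation angles
    model = ParamDenoiser(n_params)  # in practice, trained on strong VQA solutions
    theta_init = sample_init_params(model, n_params, steps=50)
    print(theta_init.shape)  # torch.Size([24])
```

The appeal of this framing is that a single trained sampler can emit initializations for many circuits and tasks at once, rather than fitting one regressor per task as prior learning-based initializers do.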