🤖 AI Summary
To address the challenges of scarce skin disease image data, severe class imbalance, and privacy constraints hindering model training, this paper proposes HQ-GAN—the first hybrid classical-quantum generative adversarial network capable of synthesizing high-fidelity color medical images. HQ-GAN integrates a classical deep convolutional GAN with a quantum generator via latent-space fusion, enabling end-to-end generation of color dermatological images for the first time. It is experimentally validated on real, noisy quantum hardware. Compared to classical DCGANs and existing hybrid quantum GANs, HQ-GAN achieves superior image fidelity (e.g., lower FID and higher SSIM scores). When deployed for data augmentation, it significantly improves downstream classification accuracy—matching state-of-the-art classical generative methods—while using over 25× fewer model parameters and 10× fewer training epochs.
📝 Abstract
Machine learning-assisted diagnosis is gaining traction in skin disease detection, but training effective models requires large amounts of high-quality data. Skin disease datasets often suffer from class imbalance, privacy concerns, and object bias, making data augmentation essential. While classical generative models are widely used, they demand extensive computational resources and lengthy training times. Quantum computing offers a promising alternative, but existing quantum-based image generation methods can only yield low-quality grayscale images. Through a novel classical-quantum latent space fusion technique, our work overcomes this limitation and introduces the first classical-quantum generative adversarial network (GAN) capable of generating color medical images. Our model outperforms classical deep convolutional GANs and existing hybrid classical-quantum GANs in both image generation quality and the classification performance boost it provides when used for data augmentation. Moreover, this performance boost is comparable with that achieved using state-of-the-art classical generative models, yet with over 25 times fewer parameters and 10 times fewer training epochs. Such results suggest a promising future for quantum image generation as quantum hardware advances. Finally, we demonstrate the robust performance of our model on a real IBM quantum machine with hardware noise.
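The classical-quantum latent-space fusion idea can be sketched as follows: a small parameterized quantum circuit produces measurement probabilities that serve as quantum latent features, which are concatenated with a classical noise vector before being fed to the GAN generator. This is a minimal illustrative sketch only — the circuit layout (RY rotations plus a CNOT chain), qubit count, and concatenation scheme are assumptions, not the paper's exact architecture, and the circuit is simulated classically with NumPy.

```python
import numpy as np

N_QUBITS = 3  # assumption: small circuit for illustration

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit, n):
    """Apply a single-qubit gate to `qubit` of an n-qubit statevector."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

def apply_cnot(state, control, target, n):
    """Apply a CNOT by flipping the target bit wherever the control bit is 1."""
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:          # control qubit is |1>
            j = i ^ (1 << (n - 1 - target))       # index with target flipped
            new[i] = state[j]
    return new

def quantum_latent(thetas, n=N_QUBITS):
    """Parameterized RY layer + CNOT chain; returns measurement probabilities."""
    state = np.zeros(2 ** n)
    state[0] = 1.0                                 # start in |00...0>
    for q, t in enumerate(thetas):
        state = apply_single(state, ry(t), q, n)
    for q in range(n - 1):
        state = apply_cnot(state, q, q + 1, n)
    return np.abs(state) ** 2                      # 2**n quantum features

rng = np.random.default_rng(0)
z_classical = rng.standard_normal(16)              # classical noise vector
z_quantum = quantum_latent(rng.uniform(0, np.pi, N_QUBITS))
# Fused latent vector that a DCGAN-style generator would consume:
z_fused = np.concatenate([z_classical, z_quantum])
```

In this sketch the trainable circuit angles `thetas` would be optimized jointly with the classical generator weights; on real hardware the probabilities would come from repeated shot measurements rather than the exact statevector.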