Dataset Distillation for Quantum Neural Networks

📅 2025-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Quantum neural networks (QNNs) depend heavily on classical training data, which drives up the number of quantum circuit executions and makes training prohibitively expensive. To address this, the authors propose a quantum-aware dataset distillation framework tailored to QNNs. The method introduces a quantum variant of the LeNet architecture featuring trainable Hermitian observables and residual connections in its parametric quantum circuit, together with a non-trainable Hermitian observable that stabilizes the distillation process at a marginal accuracy cost of up to 1.8% on MNIST and 1.3% on CIFAR-10. Evaluated on MNIST and CIFAR-10, the distilled QNN achieves inference accuracies of 91.9% and 50.3%, respectively, close to the classical LeNet baselines of 94% and 54%, while drastically reducing quantum circuit invocation counts and overall training cost.

📝 Abstract
Training Quantum Neural Networks (QNNs) on large amounts of classical data can be both time-consuming and expensive. More training data requires more gradient descent steps to reach convergence, which in turn means the QNN requires more quantum executions, driving up its overall execution cost. In this work, we propose performing dataset distillation for QNNs, using a novel quantum variant of the classical LeNet model that contains a residual connection and a trainable Hermitian observable in the Parametric Quantum Circuit (PQC) of the QNN. This approach yields a small but highly informative set of training data that achieves performance similar to the original data. We perform distillation on the MNIST and CIFAR-10 datasets and, comparing against classical models, observe that both datasets yield reasonably similar post-inference accuracy on the quantum LeNet (91.9% MNIST, 50.3% CIFAR-10) relative to the classical LeNet (94% MNIST, 54% CIFAR-10). We also introduce a non-trainable Hermitian observable to ensure stability in the distillation process and note a marginal accuracy reduction of up to 1.8% (1.3%) for the MNIST (CIFAR-10) dataset.
Problem

Research questions and friction points this paper is trying to address.

Reducing QNN training cost by distilling large datasets
Achieving similar accuracy with fewer quantum executions
Ensuring stability in distillation with Hermitian observables
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum LeNet with residual connections
Trainable Hermitian observable in PQC
Non-trainable Hermitian for stability
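The paper does not include code, but the "trainable Hermitian observable" idea can be sketched in plain NumPy: any complex square matrix A of free parameters yields a valid observable H = (A + A†)/2, so gradient descent over A's real and imaginary parts trains the measurement itself. The function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def trainable_hermitian(params_real, params_imag):
    # Symmetrizing an arbitrary complex matrix guarantees Hermiticity,
    # so the entries of params_real/params_imag can be freely optimized.
    A = params_real + 1j * params_imag
    return (A + A.conj().T) / 2

rng = np.random.default_rng(0)
n = 4  # 4x4 observable, i.e. acting on a 2-qubit state
H = trainable_hermitian(rng.normal(size=(n, n)), rng.normal(size=(n, n)))

# Expectation value <psi|H|psi> of a Hermitian observable on a
# normalized state is real, which is what makes it usable as a QNN output.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
expval = np.vdot(psi, H @ psi)

print(np.allclose(H, H.conj().T))   # Hermiticity holds by construction
print(abs(expval.imag) < 1e-12)     # expectation value is real
```

A non-trainable Hermitian (e.g. a fixed Pauli-Z tensor product) would simply hold H constant during distillation, which is the stabilizing variant the paper reports.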