Photonic Quantum-Enhanced Knowledge Distillation

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a hybrid quantum photonic–classical framework that leverages hardware-native randomness to enhance knowledge distillation efficiency and reduce student network complexity. For the first time, intrinsic randomness from programmable photonic circuits is integrated into knowledge distillation, guiding the student model via dictionary-based convolution and a gradient-free photonic parameter update mechanism. Coupled with exponential moving average feature smoothing, the approach achieves an excellent trade-off between model compression and accuracy on MNIST, Fashion-MNIST, and CIFAR-10 benchmarks—nearly matching teacher performance even under aggressive compression. Moreover, the observed performance degradation due to limited sampling follows shot-noise scaling, confirming the physical plausibility and practical viability of the proposed method.
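To make the dictionary-based convolution concrete: each layer stores only a small set of shared spatial basis filters, and per-sample channel-mixing coefficients (derived from photonic features through a fixed linear transform) combine them into effective kernels. The following is a minimal numpy sketch under our own assumptions; the variable names, shapes, and the random stand-in for the shot-limited photonic features are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dictionary convolution sketch: a layer learns only K shared spatial basis
# filters instead of full per-channel kernels.
K, kh, kw = 4, 3, 3            # dictionary size and kernel shape (assumed)
c_in, c_out = 8, 16            # channel counts (assumed)
basis = rng.standard_normal((K, kh, kw))   # learned, shared basis filters

# Hypothetical stand-in for shot-limited photonic features; the paper derives
# these from measurements of a programmable photonic circuit.
photonic_feat = rng.standard_normal(32)
fixed_map = rng.standard_normal((c_out * c_in * K, 32))  # fixed linear transform

# Sample-dependent channel-mixing weights, then effective conv kernels.
mix = (fixed_map @ photonic_feat).reshape(c_out, c_in, K)
kernels = np.einsum('oik,khw->oihw', mix, basis)

# Trainable-parameter comparison: dictionary vs. fully trainable kernels.
dict_params = K * kh * kw                  # 36
full_params = c_out * c_in * kh * kw       # 1152
print(kernels.shape, dict_params, full_params)
```

The compression comes from the parameter count: only the K basis filters (and the classical layers) are trained by gradient descent, while the mixing weights are supplied per sample by the photonic front end.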

📝 Abstract
Photonic quantum processors naturally produce intrinsically stochastic measurement outcomes, offering a hardware-native source of structured randomness that can be exploited during machine-learning training. Here we introduce Photonic Quantum-Enhanced Knowledge Distillation (PQKD), a hybrid quantum photonic–classical framework in which a programmable photonic circuit generates a compact conditioning signal that constrains and guides a parameter-efficient student network during distillation from a high-capacity teacher. PQKD replaces fully trainable convolutional kernels with dictionary convolutions: each layer learns only a small set of shared spatial basis filters, while sample-dependent channel-mixing weights are derived from shot-limited photonic features and mapped through a fixed linear transform. Training alternates between standard gradient-based optimisation of the student and sampling-robust, gradient-free updates of photonic parameters, avoiding differentiation through photonic hardware. Across MNIST, Fashion-MNIST and CIFAR-10, PQKD traces a controllable compression–accuracy frontier, remaining close to teacher performance on simpler benchmarks under aggressive convolutional compression. Performance degrades predictably with finite sampling, consistent with shot-noise scaling, and exponential moving-average feature smoothing suppresses high-frequency shot-noise fluctuations, extending the practical operating regime at moderate shot budgets.
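The shot-noise scaling and EMA smoothing claims can be illustrated numerically: an N-shot estimate of an outcome probability p has standard error ~sqrt(p(1-p)/N), and an exponential moving average over training steps damps the high-frequency component of that noise. This is a self-contained numpy sketch with illustrative numbers of our own choosing, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shot-limited estimate of an outcome probability p from n_shots samples.
p = 0.3
def shot_estimate(n_shots, size):
    return rng.binomial(n_shots, p, size) / n_shots

# Empirical spread vs. the sqrt(p(1-p)/N) shot-noise prediction.
stds = {}
for n in (64, 256, 1024):
    est = shot_estimate(n, 10_000)
    stds[n] = est.std()
    print(n, stds[n], np.sqrt(p * (1 - p) / n))

# EMA smoothing of a noisy per-step feature stream (beta is an assumed value).
beta = 0.9
stream = shot_estimate(64, 500)
ema = np.empty_like(stream)
acc = stream[0]
for i, x in enumerate(stream):
    acc = beta * acc + (1 - beta) * x
    ema[i] = acc
print(stream.std(), ema.std())
```

The smoothed stream fluctuates with a markedly smaller amplitude than the raw shot-limited stream, which is the mechanism the abstract credits for extending the usable regime at moderate shot budgets.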
Problem

Research questions and friction points this paper is trying to address.

quantum-enhanced
knowledge distillation
photonic processors
model compression
structured randomness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Photonic Quantum Processing
Knowledge Distillation
Dictionary Convolution
Shot-Noise Robustness
Hybrid Quantum-Classical Learning
Kuan-Cheng Chen
Department of Electrical and Electronic Engineering, Imperial College London, South Kensington, London, SW7 2AZ, England, United Kingdom.
Shang Yu
Research Associate, Imperial College London
Quantum Photonics, Quantum Computing, Quantum Information, Quantum Machine Learning
Chen-Yu Liu
Research Scientist at Quantinuum
Quantum Computing, Quantum many-body physics, Artificial intelligence, General relativity
Samuel Yen-Chi Chen
Wells Fargo
quantum computation, quantum information, machine learning, quantum machine learning
Huan-Hsin Tseng
Brookhaven National Laboratory
Quantum Computing, Machine Learning, Mathematical Physics, General Relativity, Gauge Theories
Yen Jui Chang
Quantum Information Center, Chung Yuan Christian University, No. 200, Zhongbei Rd., Zhongli Dist., Taoyuan City, 320314, Taiwan.
Wei-Hao Huang
JIJ, Rutherford Appleton Laboratory, Harwell Campus, Didcot, OX11 0QX, United Kingdom.
Felix Burt
PhD Student, Imperial College London
quantum computing, distributed quantum computing, quantum networks
Esperanza Cuenca Gomez
NVIDIA Corporation, Santa Clara, CA, USA.
Zohim Chandani
NVIDIA
Quantum Computing
William Clements
ORCA Computing, London, United Kingdom.
Ian Walmsley
Blackett Laboratory, Department of Physics, Imperial College London, South Kensington, London, SW7 2AZ, England, United Kingdom.
Kin K. Leung
Tanaka Chair Professor, Imperial College
Communications networks, wireless and Internet technologies, machine learning and optimization