Distributed Quantum Neural Networks on Distributed Photonic Quantum Computing

📅 2025-05-13
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the inefficiency of parameter training and poor classical-quantum integration in photonic quantum neural networks (QNNs) for distributed photonic quantum computing, this paper proposes a photonic quantum-classical hybrid training framework. It leverages photonic QNNs to generate high-dimensional probability distributions, which are then compressed into compact classical weights via matrix product states (MPS), enabling end-to-end differentiable training. Crucially, quantum hardware is unnecessary during inference, preserving quantum representational power while ensuring classical deployability. The method integrates universal linear optical interferometer decomposition, photon-counting statistical modeling, and noise-robust simulation. On MNIST, it achieves 95.50% accuracy with only 3,292 parameters, over 2× more parameter-efficient than comparable classical baselines, with <3% accuracy degradation. Ablation studies and noise-resilient simulations validate both feasibility and the essential role of quantum components.
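The parameter-efficiency claim can be checked directly from the reported figures (3,292 trainable parameters for the photonic framework vs. 6,690 for the classical baseline):

```python
# Reported parameter counts from the paper's MNIST experiments.
quantum_params = 3292     # photonic QT at bond dimension chi = 10
classical_params = 6690   # comparable classical baseline

ratio = classical_params / quantum_params
print(f"{ratio:.2f}x fewer parameters")  # just over 2x
```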

📝 Abstract
We introduce a distributed quantum-classical framework that synergizes photonic quantum neural networks (QNNs) with matrix-product-state (MPS) mapping to achieve parameter-efficient training of classical neural networks. By leveraging universal linear-optical decompositions of $M$-mode interferometers and photon-counting measurement statistics, our architecture generates neural parameters through a hybrid quantum-classical workflow: photonic QNNs with $M(M+1)/2$ trainable parameters produce high-dimensional probability distributions that are mapped to classical network weights via an MPS model with bond dimension $\chi$. Empirical validation on MNIST classification demonstrates that photonic QT achieves an accuracy of $95.50\% \pm 0.84\%$ using 3,292 parameters ($\chi = 10$), compared to $96.89\% \pm 0.31\%$ for classical baselines with 6,690 parameters. Moreover, a ten-fold compression ratio is achieved at $\chi = 4$, with a relative accuracy loss of less than $3\%$. The framework outperforms classical compression techniques (weight sharing/pruning) by 6--12% absolute accuracy while eliminating quantum hardware requirements during inference through classical deployment of compressed parameters. Simulations incorporating realistic photonic noise demonstrate the framework's robustness to near-term hardware imperfections. Ablation studies confirm quantum necessity: replacing photonic QNNs with random inputs collapses accuracy to chance level ($10.0\% \pm 0.5\%$). Photonic quantum computing's room-temperature operation, inherent scalability through spatial-mode multiplexing, and HPC-integrated architecture establish a practical pathway for distributed quantum machine learning, combining the expressivity of photonic Hilbert spaces with the deployability of classical neural networks.
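The workflow in the abstract can be sketched end to end: an $M$-mode interferometer contributes $M(M+1)/2$ trainable phases, photon counting yields a $2^M$-dimensional probability distribution, and an MPS with bond dimension $\chi$ compresses that distribution into classical weights. The NumPy sketch below is illustrative only: the measurement model is a placeholder (real probabilities come from simulating or running the photonic circuit), and all names are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

M = 6                               # optical modes
n_phases = M * (M + 1) // 2         # 21 trainable interferometer phases
phases = rng.uniform(0.0, 2.0 * np.pi, n_phases)

# Stand-in for photon-counting statistics: in the real system this
# 2**M-dimensional distribution comes from measuring the interferometer
# output; here it is just a smooth function of the phases.
logits = np.sin(np.outer(np.arange(2 ** M), phases)).sum(axis=1)
probs = np.exp(logits) / np.exp(logits).sum()

# MPS with bond dimension chi: each weight index is binary-encoded over
# M sites; contracting the cores along that index gives a scalar that
# rescales the measured probability into a signed classical weight.
chi = 4
cores = [rng.standard_normal((1 if s == 0 else chi, 2,
                              1 if s == M - 1 else chi)) / np.sqrt(chi)
         for s in range(M)]

def mps_scalar(bits):
    """Contract the MPS cores along one binary index string."""
    v = np.ones((1,))
    for core, b in zip(cores, bits):
        v = v @ core[:, b, :]
    return float(v[0])

weights = np.array([mps_scalar([int(b) for b in np.binary_repr(i, M)]) * probs[i]
                    for i in range(2 ** M)])
print(weights.shape)  # (64,) classical weights, deployable without quantum hardware
```

During training, gradients flow through both the MPS cores and the phases; at inference, only the contracted classical weights are needed, which is what makes quantum-hardware-free deployment possible.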
Problem

Research questions and friction points this paper is trying to address.

Achieving parameter-efficient training of classical neural networks using photonic QNNs and MPS mapping.
Demonstrating quantum advantage in neural network compression and accuracy over classical methods.
Validating robustness of distributed quantum-classical framework to realistic photonic noise.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed quantum-classical framework with photonic QNNs
Matrix-product-state mapping for parameter-efficient training
Hybrid quantum-classical workflow for neural parameters