Training-efficient density quantum machine learning

📅 2024-05-30
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Quantum machine learning requires models that simultaneously achieve high expressivity and hardware-efficient trainability. To address this, the authors propose the Density Quantum Neural Network (DQNN), a unified modeling paradigm based on density matrices. The method leverages the Hastings-Campbell mixing lemma to construct shallow yet high-performance quantum circuits; employs commuting-generator parameterization to enable efficient analytical gradient computation; and integrates linear combinations of unitaries (LCU), mixture-of-experts architectures, Hamming-weight conservation, and equivariance constraints. The framework unifies post-variational optimization and measurement-based learning. Experiments demonstrate that DQNN significantly improves training efficiency and generalization across diverse tasks, including equivariant and Hamming-weight-preserving models, while mitigating overfitting and substantially reducing circuit depth.

📝 Abstract
Quantum machine learning (QML) requires powerful, flexible and efficiently trainable models to be successful in solving challenging problems. We introduce density quantum neural networks, a model family that prepares mixtures of trainable unitaries, with a distributional constraint over coefficients. This framework balances expressivity and efficient trainability, especially on quantum hardware. For expressivity, the Hastings-Campbell Mixing lemma converts benefits from linear combination of unitaries into density models with similar performance guarantees but shallower circuits. For trainability, commuting-generator circuits enable density model construction with efficiently extractable gradients. The framework connects to various facets of QML including post-variational and measurement-based learning. In classical settings, density models naturally integrate the mixture of experts formalism, and offer natural overfitting mitigation. The framework is versatile - we uplift several quantum models into density versions to improve model performance, or trainability, or both. These include Hamming weight-preserving and equivariant models, among others. Extensive numerical experiments validate our findings.
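The abstract's core construction, preparing a mixture of trainable unitaries with a distributional constraint over the coefficients, can be sketched numerically. The following is a minimal single-qubit toy illustration, not the paper's actual architecture: all names (`rx`, the softmax parameterization, the choice of observable) are illustrative assumptions.

```python
import numpy as np

def rx(theta):
    # Single-qubit X rotation (illustrative choice of trainable unitary)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def softmax(z):
    # Enforces the distributional constraint: weights are a probability vector
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, size=4)  # trainable rotation angles
logits = rng.normal(size=4)                 # trainable mixture logits
probs = softmax(logits)

# Density model output: rho_out = sum_i p_i U_i rho U_i^dagger
rho_in = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
rho_out = sum(p * (rx(t) @ rho_in @ rx(t).conj().T)
              for p, t in zip(probs, angles))

# Model prediction as an expectation value of an observable
Z = np.diag([1.0, -1.0])
expval = np.real(np.trace(Z @ rho_out))
```

Because the weights form a probability distribution and each term is a unitary channel, `rho_out` is always a valid density matrix, which is what lets the model be realized by classically sampling which unitary to run on hardware.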
Problem

Research questions and friction points this paper is trying to address.

Balancing expressivity and efficient trainability in QML
Enhancing quantum models with density neural networks
Mitigating overfitting in classical and quantum settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Density quantum neural networks with trainable unitaries
Hastings-Campbell Mixing lemma for shallower circuits
Commuting-generator circuits enable efficient gradient extraction
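The mixing-lemma bullet above can be made concrete with a toy example. The Hastings-Campbell lemma implies that a probabilistic mixture of unitaries can approximate a target channel quadratically better than any one of the constituent unitaries. The sketch below, a single-qubit illustration and not the paper's construction, mixes two detuned Z rotations and compares the coherent (unitary) error with the mixed-channel error on the |+> state:

```python
import numpy as np

def rz(phi):
    # Single-qubit Z rotation
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

delta = 0.1
target = rz(0.0)                       # target unitary (theta = 0 w.l.o.g.)
u_plus, u_minus = rz(delta), rz(-delta)  # two imperfect approximations

# Coherent error of a single unitary: operator norm, scales as O(delta)
unitary_err = np.linalg.norm(u_plus - target, ord=2)

# Mixed-channel error on the |+> state: trace distance, scales as O(delta^2)
plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
mixed = 0.5 * (u_plus @ plus @ u_plus.conj().T) \
      + 0.5 * (u_minus @ plus @ u_minus.conj().T)
target_state = target @ plus @ target.conj().T
trace_dist = 0.5 * np.abs(np.linalg.eigvalsh(mixed - target_state)).sum()

print(unitary_err)  # ~ delta/2
print(trace_dist)   # ~ delta^2/4, quadratically smaller
```

The off-diagonal phases e^{-i delta} and e^{i delta} average to cos(delta), so the first-order errors cancel in the mixture; this is the mechanism that lets density models trade circuit depth for classical sampling.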
Brian Coyle
Fujitsu Research of Europe
Quantum computing, Quantum machine learning, Quantum cryptography
El Amine Cherrat
QC Ware, Palo Alto, USA and Paris, France.
Nishant Jain
Yale University Department of Computer Science
Distributed Systems, Mobile Computing, Transcriptomics, Mobile Medical Technology
Natansh Mathur
QC Ware, Palo Alto, USA and Paris, France; IRIF, CNRS - University of Paris, France.
Snehal Raj
QC Ware, Palo Alto, USA and Paris, France.
Skander Kazdaghli
QC Ware, Palo Alto, USA and Paris, France.
Iordanis Kerenidis
CNRS and Quantum Signals
Quantum Computation and Communication