Entanglement-induced provable and robust quantum learning advantages

📅 2024-10-04
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work rigorously establishes unconditional advantages of quantum models over commonly used classical models in expressive power, inference speed, and training efficiency, including in the presence of noise. To address classical models' limitations on nonlocal machine learning tasks (weak expressivity, slow inference, and susceptibility to overfitting), the authors propose a learning framework grounded in quantum entanglement. Using information-theoretic techniques, they prove that entanglement reduces the communication such tasks require, yielding a provable separation: a quantum model with a constant number of variational parameters solves a task on which classical models need at least linearly many parameters. The quantum model can be trained in constant time with a sample complexity inversely proportional to the problem size, and the advantage is robust against constant depolarizing noise. The quantum-classical learning separation is validated through variational quantum circuit design, numerical simulations, and experiments on the IonQ Aria trapped-ion quantum processor, demonstrating the advantage at small system sizes.

📝 Abstract
Quantum computing holds unparalleled potential to enhance, speed up, or innovate machine learning. However, an unambiguous demonstration of a quantum learning advantage has not been achieved so far. Here, we rigorously establish a noise-robust, unconditional quantum learning advantage in terms of expressivity, inference speed, and training efficiency, compared to commonly used classical machine learning models. Our proof is information-theoretic and pinpoints the origin of this advantage: quantum entanglement can be used to reduce the communication required by non-local machine learning tasks. In particular, we design a fully classical task that can be solved with unit accuracy by a quantum model with a constant number of variational parameters using entanglement resources, whereas commonly used classical models must scale at least linearly with the size of the task to achieve a larger-than-exponentially-small accuracy. We further show that the quantum model can be trained with constant time and a number of samples inversely proportional to the problem size. We prove that this advantage is robust against constant depolarization noise. We show through numerical simulations that even though the classical models can improve their performance as their sizes are increased, they would suffer from overfitting. The constant-versus-linear separation, bolstered by the overfitting problem, makes it possible to demonstrate the quantum advantage with relatively small system sizes. We demonstrate, through both numerical simulations and trapped-ion experiments on IonQ Aria, the desired quantum-classical learning separation. Our results provide a valuable guide for demonstrating quantum learning advantages in practical applications with current noisy intermediate-scale quantum devices.
Problem

Research questions and friction points this paper is trying to address.

Demonstrating quantum learning advantage using entanglement
Proving noise-robust quantum superiority in speed and efficiency
Reducing communication needs in non-local tasks via entanglement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum models that exploit entanglement to reduce communication in non-local tasks
Quantum models trainable in constant time with few samples
Quantum models provably robust against constant depolarizing noise
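The contributions above hinge on entangled states exhibiting correlations that no classical (local) strategy can reproduce, and on those correlations surviving constant depolarizing noise. As a minimal, self-contained illustration of this principle (not the paper's actual construction), the sketch below computes the CHSH value of a Bell pair, which reaches 2√2 versus the classical bound of 2, and shows that it remains above 2 under a two-qubit depolarizing channel:

```python
import numpy as np

# Pauli matrices
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def obs(theta):
    # Spin observable in the X-Z plane at angle theta
    return np.cos(theta) * Z + np.sin(theta) * X

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density matrix
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2)
rho_bell = np.outer(phi, phi)

def depolarize(rho, p):
    # Two-qubit depolarizing channel: rho -> (1-p) rho + p I/4
    return (1.0 - p) * rho + p * np.eye(4) / 4.0

def chsh(rho):
    # CHSH value S = E(a,b) + E(a',b) + E(a',b') - E(a,b')
    # with the standard optimal measurement angles
    a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    E = lambda t1, t2: np.trace(rho @ np.kron(obs(t1), obs(t2))).real
    return E(a, b) + E(ap, b) + E(ap, bp) - E(a, bp)

print(chsh(rho_bell))                    # 2*sqrt(2) ~ 2.828, above the classical bound 2
print(chsh(depolarize(rho_bell, 0.2)))   # 0.8 * 2*sqrt(2) ~ 2.263, still above 2
```

Because the correlations only shrink linearly in the noise rate p, the quantum value stays above the classical bound for any constant p below 1 - 1/√2, which is the same qualitative mechanism behind the paper's noise-robustness claim.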
Haimeng Zhao
Center for Quantum Information, IIIS, Tsinghua University, Beijing 100084, China; Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA 91125, USA
Dong-Ling Deng
Center for Quantum Information, IIIS, Tsinghua University, Beijing 100084, China; Hefei National Laboratory, Hefei 230088, China; Shanghai Qi Zhi Institute, Shanghai 200232, China