On the relation between trainability and dequantization of variational quantum learning models

📅 2024-06-11
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
This study addresses the compatibility of trainability and non-dequantization in variational quantum machine learning (QML), challenging the view that the two properties are necessarily mutually exclusive. The authors first formalize both concepts from an operational, machine-learning perspective, with definitions somewhat broader than, but consistent with, prior literature. They then prove a number of results identifying when trainability and non-dequantization can coexist, and introduce a spectrum of "variationalness" that distinguishes models such as the hardware-efficient ansatz from quantum kernel methods. Finally, they give recipes for constructing PQC-based QML models that are simultaneously trainable and non-dequantizable at different degrees of variationalness. Practical utility is deliberately left open, but the results point toward more general constructions for which applications may become feasible.

📝 Abstract
The quest for successful variational quantum machine learning (QML) relies on the design of suitable parametrized quantum circuits (PQCs), as analogues to neural networks in classical machine learning. Successful QML models must fulfill the properties of trainability and non-dequantization, among others. Recent works have highlighted an intricate interplay between trainability and dequantization of such models, which is still unresolved. In this work we contribute to this debate from the perspective of machine learning, proving a number of results that identify, among other things, when trainability and non-dequantization are not mutually exclusive. We begin by providing somewhat broader definitions of the relevant concepts than those found in other literature, which are operationally motivated and consistent with prior art. With these precise definitions given and motivated, we then study the relation between trainability and dequantization of variational QML. Next, we discuss the degrees of "variationalness" of QML models, distinguishing between models like the hardware-efficient ansatz and quantum kernel methods. Finally, we introduce recipes for building PQC-based QML models which are both trainable and non-dequantizable, corresponding to different degrees of variationalness. We do not address the practical utility of such models; our work does, however, point toward a way forward for finding more general constructions for which finding applications may become feasible.
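
The trainability property discussed in the abstract is usually probed via the variance of cost-function gradients: for many hardware-efficient ansätze this variance vanishes as qubits are added (barren plateaus), which is exactly the failure mode the paper's recipes aim to avoid. Below is a minimal, self-contained sketch, written for this summary and not taken from the paper, that builds an RY + CNOT-ring ansatz in plain NumPy and estimates the parameter-shift gradient variance of ⟨Z₀⟩ over random parameters; all function names are illustrative.

```python
import numpy as np

def apply_single(state, gate, wire, n):
    """Apply a 2x2 gate to qubit `wire` of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [wire]))
    psi = np.moveaxis(psi, 0, wire)
    return psi.reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply CNOT(control -> target) to an n-qubit statevector."""
    psi = state.reshape([2] * n).copy()
    sl = [slice(None)] * n
    sl[control] = 1
    # On the control = 1 slice, flip the target qubit (the target axis
    # index shifts down by one if it comes after the sliced-out control).
    psi[tuple(sl)] = np.flip(psi[tuple(sl)], axis=target - (target > control))
    return psi.reshape(-1)

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def hea_expval(params, n):
    """<Z_0> for a hardware-efficient ansatz: RY layers + CNOT ring."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for layer in params:                      # params has shape (layers, n)
        for w in range(n):
            state = apply_single(state, ry(layer[w]), w, n)
        for w in range(n):
            state = apply_cnot(state, w, (w + 1) % n, n)
    probs = np.abs(state.reshape(2, -1)) ** 2
    return probs[0].sum() - probs[1].sum()    # P(q0 = 0) - P(q0 = 1)

def grad_variance(n, layers=4, samples=200, seed=0):
    """Variance of the parameter-shift gradient d<Z_0>/d params[0, 0]
    over uniformly random parameter settings."""
    rng = np.random.default_rng(seed)
    grads = []
    for _ in range(samples):
        p = rng.uniform(0.0, 2.0 * np.pi, size=(layers, n))
        plus, minus = p.copy(), p.copy()
        plus[0, 0] += np.pi / 2
        minus[0, 0] -= np.pi / 2
        grads.append(0.5 * (hea_expval(plus, n) - hea_expval(minus, n)))
    return np.var(grads)

for n in (2, 4, 6, 8):
    print(f"n = {n}: Var[grad] ~ {grad_variance(n):.3e}")
```

Running the loop shows the gradient variance tending to shrink as n grows, the quantitative signature of the trainability obstruction that the paper's constructions are designed to circumvent without making the model classically simulable.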
Problem

Research questions and friction points this paper is trying to address.

When are trainability and non-dequantization mutually exclusive in variational QML models?
How should parametrized quantum circuits (PQCs) be designed so that models remain trainable without becoming classically simulable?
How "variational" are different QML models, from hardware-efficient ansätze to quantum kernel methods?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Operationally motivated definitions of trainability and dequantization for QML
Results identifying when trainability and non-dequantization can coexist
Degrees of "variationalness" in QML models, with recipes for trainable, non-dequantizable PQC designs (see the note below)
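
As context for the "variationalness" item above: at one end of the spectrum, a hardware-efficient ansatz trains all circuit parameters; at the other, a quantum kernel method fixes the circuit as a data-encoding map U(x) and learns only classical weights over the standard fidelity kernel. This formulation is standard QML background, not quoted from the paper:

k(x, x′) = |⟨0|U†(x′) U(x)|0⟩|²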
Elies Gil-Fuster
Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, 14195 Berlin, Germany; Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany
Casper Gyurik
⟨aQaL⟩ Applied Quantum Algorithms, Universiteit Leiden
Adrián Pérez-Salinas
Institute for Theoretical Physics, ETH Zürich
quantum computing · machine learning · entanglement · quantum information
Vedran Dunjko
Leiden University
Quantum computing · AI