🤖 AI Summary
This work addresses the high quantum overhead of classical data loading and the poor trainability of quantum machine learning models on near-term hardware by proposing a supervised learning framework based on encoding classical data into the ground states of k-local Hamiltonians. The approach compactly maps input data to low-energy eigenstates of parameterized Hamiltonians, which are efficiently approximated using sample-based Krylov quantum diagonalization. Shallow quantum circuits are then trained via local gradient-based optimization to prepare these states. By circumventing explicit data-encoding circuits, the method substantially reduces circuit depth, alleviates the data-loading bottleneck, and enhances model trainability. Empirical validation on up to 50 qubits using IBM's Heron processor demonstrates the framework's effectiveness and scalability on standard benchmark datasets.
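To make the core encoding idea concrete, here is a deliberately minimal, hypothetical illustration (not the paper's actual construction) of mapping a classical feature vector to the ground state of a parameterized Hamiltonian. The single-qubit model H(x) = x₀X + x₁Z and all names below are illustrative assumptions; the paper works with multi-qubit k-local Hamiltonians.

```python
# Toy sketch (an assumption, not the paper's method): encode a 2D data
# point x in the ground state of the 1-qubit Hamiltonian H(x) = x0*X + x1*Z.
# Its eigenvalues are +/-||x||, so the ground energy is -||x|| and the
# ground-state direction on the Bloch sphere encodes the data point.
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def data_hamiltonian(x):
    """Map a 2D feature vector to a 1-qubit Hermitian operator."""
    return x[0] * X + x[1] * Z

def ground_state(H):
    """Lowest eigenpair of a small Hermitian matrix."""
    evals, evecs = np.linalg.eigh(H)
    return evals[0], evecs[:, 0]

x = np.array([0.6, 0.8])  # a unit-norm data point
e0, psi0 = ground_state(data_hamiltonian(x))
print(e0)  # ground energy equals -||x||
```

The point of the construction is that the data enter as Hamiltonian parameters rather than through a deep state-preparation circuit; only a (shallow) circuit approximating the resulting low-energy state is needed on hardware.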
📄 Abstract
Quantum computing has long promised transformative advances in data analysis, yet practical quantum machine learning has remained elusive due to fundamental obstacles such as the steep quantum cost of loading classical data and the poor trainability of many quantum machine learning algorithms designed for near-term hardware. In this work, we show that these obstacles can be overcome with a linear Hamiltonian-based machine learning method that provides a compact quantum representation of classical data via ground-state problems for k-local Hamiltonians. We use the recent sample-based Krylov quantum diagonalization method to compute low-energy states of the data Hamiltonians, whose parameters are trained via local gradients to express classical datasets. We demonstrate the efficacy and scalability of the method in experiments on benchmark datasets using up to 50 qubits of an IBM Heron quantum processor.
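The Krylov step can be illustrated with a small classical stand-in. The sketch below (an assumption-laden toy, not the paper's sample-based quantum version, which builds the subspace from measurement samples on hardware) projects a k-local Hamiltonian onto a low-dimensional Krylov subspace and diagonalizes the projected matrix; the transverse-field Ising model and all parameter values are illustrative choices.

```python
# Classical toy of Krylov diagonalization for a k-local Hamiltonian:
# project H onto span{psi0, H psi0, ..., H^(d-1) psi0}, diagonalize the
# small projected matrix, and compare with exact diagonalization.
import numpy as np

def tfim(n, j=1.0, h=0.7):
    """Dense transverse-field Ising Hamiltonian (an illustrative 2-local
    model): H = -J sum_i Z_i Z_{i+1} - h sum_i X_i on an open chain."""
    I = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.diag([1., -1.])
    def embed(ops):  # ops: dict {site index: 2x2 matrix}
        out = np.array([[1.0]])
        for i in range(n):
            out = np.kron(out, ops.get(i, I))
        return out
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= j * embed({i: Z, i + 1: Z})
    for i in range(n):
        H -= h * embed({i: X})
    return H

def krylov_ground_energy(H, psi0, dim):
    """Smallest eigenvalue of H projected onto an orthonormalized Krylov
    basis (Lanczos with full reorthogonalization for numerical stability)."""
    basis = [psi0 / np.linalg.norm(psi0)]
    for _ in range(dim - 1):
        v = H @ basis[-1]
        for q in basis:            # orthogonalize against current basis
            v -= (q @ v) * q
        nv = np.linalg.norm(v)
        if nv < 1e-12:             # subspace became invariant; stop early
            break
        basis.append(v / nv)
    B = np.column_stack(basis)
    return np.linalg.eigvalsh(B.T @ H @ B)[0]  # variational upper bound

n = 6
H = tfim(n)
exact = np.linalg.eigvalsh(H)[0]
psi0 = np.ones(2**n) / np.sqrt(2**n)  # |+...+> reference state
est = krylov_ground_energy(H, psi0, dim=8)
print(f"exact: {exact:.6f}  krylov(dim=8): {est:.6f}")
```

Because the projected problem lives in a dim-by-dim space, the estimate is cheap, and by the variational principle it upper-bounds the true ground energy while converging rapidly toward it as the subspace dimension grows.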