Assessing Quantum Advantage for Gaussian Process Regression

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the open question of whether quantum Gaussian process regression (GPR) algorithms achieve the theoretically predicted exponential quantum speedup. Method: We conduct a systematic assessment by integrating random matrix theory, functional analysis, and large-scale numerical experiments across standard kernels—including RBF and Matérn—under both quantum data loading and dequantization frameworks. Contribution/Results: We rigorously prove that, for these kernels, the condition number of the kernel matrix scales at least linearly with dataset size $N$, and that its sparsity and Frobenius norm also scale linearly, thereby invalidating the exponential-speedup assumptions underlying multiple quantum GPR proposals. Consequently, any quantum GPR algorithm relying on kernel matrix inversion or sampling exhibits a fundamental complexity lower bound of $\Omega(N^2)$. Numerical verification with popular machine learning kernels confirms robustness across implementation paradigms. This is the first work to establish a universal, theoretically grounded negation of exponential speedup for quantum GPR.
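As a hedged sketch of the core argument (notation mine, not taken from the paper's text): quantum linear-system solvers of the kind these GPR proposals build on have runtime at least linear in the condition number $\kappa$ of the matrix being inverted, so a linear lower bound on $\kappa$ already rules out polylogarithmic runtimes:

$$
T_{\mathrm{quantum}} \;=\; \Omega\big(\kappa(K)\big), \qquad \kappa(K) \;=\; \Omega(N) \;\;\Longrightarrow\;\; T_{\mathrm{quantum}} \;=\; \Omega(N),
$$

which is exponentially worse than the $O(\mathrm{polylog}\,N)$ scaling an exponential speedup would require; the linear sparsity and Frobenius-norm bounds tighten this further for inversion- and sampling-based subroutines.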

📝 Abstract
Gaussian Process Regression is a well-known machine learning technique for which several quantum algorithms have been proposed. We show here that in a wide range of scenarios these algorithms show no exponential speedup. We achieve this by rigorously proving that the condition number of a kernel matrix scales at least linearly with the matrix size under general assumptions on the data and kernel. We additionally prove that the sparsity and Frobenius norm of a kernel matrix scale linearly under similar assumptions. The implications for the quantum algorithms' runtime are independent of the complexity of loading classical data on a quantum computer and also apply to dequantised algorithms. We supplement our theoretical analysis with numerical verification for popular kernels in machine learning.
Problem

Research questions and friction points this paper is trying to address.

Assessing quantum advantage in Gaussian Process Regression
Proving no exponential speedup in quantum algorithms
Analyzing kernel matrix properties under general assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proving linear kernel matrix condition number scaling
Establishing linear sparsity and Frobenius norm scaling
Numerically verifying popular machine learning kernels
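The numerical verification described above can be sketched in a few lines. A minimal illustration (my own, not the paper's code; the RBF kernel, uniform toy data on the unit square, and the GPR noise variance $\sigma^2 = 0.1$ are illustrative assumptions): as $N$ grows, both the condition number of the regularised kernel matrix $K + \sigma^2 I$ and the Frobenius norm $\|K\|_F$ grow roughly linearly.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, length_scale=1.0):
    """RBF kernel matrix: k(x, x') = exp(-||x - x'||^2 / (2 * length_scale^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = np.clip(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0, None)
    return np.exp(-d2 / (2.0 * length_scale**2))

noise = 0.1  # illustrative GPR noise variance sigma^2; GPR inverts K + sigma^2 I
results = {}
for n in (100, 200, 400, 800):
    X = rng.uniform(0.0, 1.0, size=(n, 2))  # toy inputs on the unit square
    K = rbf_kernel(X)
    cond = np.linalg.cond(K + noise * np.eye(n))  # condition number of K + sigma^2 I
    fro = np.linalg.norm(K, "fro")                # Frobenius norm of K
    results[n] = (cond, fro)
    print(f"N={n:4d}  cond(K + sigma^2 I) = {cond:10.1f}  ||K||_F = {fro:8.1f}")
```

Doubling $N$ should roughly double both quantities, consistent with the linear-scaling results; with different kernels or data distributions the constants change, but the paper's claim is that the linear lower bound on the condition number persists under general assumptions.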
Dominic Lowe
Blackett Laboratory, Imperial College London
M. S. Kim
Blackett Laboratory, Imperial College London
Roberto Bondesan
Imperial College London
Quantum Computing · Machine learning