🤖 AI Summary
This work addresses the open question of whether quantum Gaussian process regression (GPR) algorithms achieve their theoretically predicted exponential quantum speedup. Method: We conduct a systematic assessment integrating random matrix theory, functional analysis, and large-scale numerical experiments across standard kernels, including RBF and Matérn, under both quantum data-loading and dequantization frameworks. Contribution/Results: We rigorously prove that, for these kernels, the condition number of the kernel matrix scales at least linearly with the dataset size $N$, and that its sparsity and Frobenius norm likewise scale linearly, thereby invalidating the exponential-speedup assumptions underlying multiple quantum GPR proposals. Consequently, any quantum GPR algorithm relying on kernel-matrix inversion or sampling faces a fundamental complexity lower bound of $\Omega(N^2)$. Numerical experiments on diverse real-world and synthetic datasets confirm that these scalings hold across implementation paradigms. This is the first work to establish a universal, theoretically grounded refutation of exponential speedup for quantum GPR.
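A hedged sketch of where the $\Omega(N^2)$ bound comes from, assuming the standard complexity of state-of-the-art quantum linear-system solvers, roughly $\widetilde{O}(s\kappa \cdot \mathrm{polylog}(N/\epsilon))$ for an $s$-sparse matrix with condition number $\kappa$ (this solver complexity is background knowledge, not a result of this work):

$$ s = \Theta(N), \qquad \kappa = \Omega(N) \quad \Longrightarrow \quad s\,\kappa = \Omega(N^2). $$

Since known lower bounds suggest the dependence on $\kappa$ cannot be made sublinear, the kernel-matrix inversion step alone costs $\widetilde{\Omega}(N^2)$, reducing any advantage over classical $O(N^3)$ GPR to at most polynomial.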
📝 Abstract
Gaussian Process Regression is a well-known machine learning technique for which several quantum algorithms have been proposed. We show here that in a wide range of scenarios these algorithms exhibit no exponential speedup. We achieve this by rigorously proving that the condition number of a kernel matrix scales at least linearly with the matrix size under general assumptions on the data and kernel. We additionally prove that the sparsity and Frobenius norm of a kernel matrix scale linearly under similar assumptions. The implications for the quantum algorithms' runtimes are independent of the complexity of loading classical data on a quantum computer and also apply to dequantised algorithms. We supplement our theoretical analysis with numerical verification for popular kernels in machine learning.
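As an illustration of the kind of numerical verification described, here is a minimal sketch (not the authors' code; the synthetic 2-D inputs, unit lengthscale, and the $10^{-8}$ sparsity threshold are illustrative assumptions) that tracks how the condition number, Frobenius norm, and per-row density of an RBF kernel matrix grow with $N$:

```python
# Minimal numerical sketch of the scaling checks described in the abstract.
# Not the authors' code: the synthetic 2-D inputs, the lengthscale, and the
# 1e-8 sparsity threshold are illustrative assumptions.
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    """Dense RBF kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 l^2))."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * lengthscale ** 2))

rng = np.random.default_rng(0)
for N in (100, 200, 400, 800):
    X = rng.uniform(size=(N, 2))                  # synthetic inputs on the unit square
    K = rbf_kernel(X)
    cond = np.linalg.cond(K)                      # condition number
    frob = np.linalg.norm(K, "fro")               # Frobenius norm (grows linearly in N)
    row_nnz = np.mean(np.sum(K > 1e-8, axis=1))   # non-negligible entries per row (~N: K is dense)
    print(f"N={N:4d}  cond={cond:.2e}  ||K||_F={frob:.2e}  nnz/row={row_nnz:.0f}")
```

Running this, the Frobenius norm and per-row density track $N$ closely, while the condition number of the smooth RBF kernel typically grows even faster than linearly, consistent with the "at least linearly" bound above.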