On the average-case hardness of BosonSampling

📅 2024-11-07
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
🤖 AI Summary
This work addresses the average-case hardness of classically simulating BosonSampling and random circuit sampling: specifically, how #P-hardness of computing output probabilities implies intractability of the sampling task itself. Existing results establish only weak average-case hardness. Method: The paper significantly strengthens the additive-error #P-hardness bound for the Gaussian permanent, proving that estimates to within additive error $e^{-n\log n - n - O(n^\delta)}$ are #P-hard for any $\delta > 0$, nearly matching the conjectured $e^{-n\log n - n - O(\log n)}$ robustness. Under an anti-concentration conjecture, the authors further prove that multiplicative-error sampling from random BosonSampling experiments cannot be performed efficiently classically unless the polynomial hierarchy collapses. Contribution/Results: This is the first result to achieve such a strong exponential additive-error hardness bound for the Gaussian permanent, and the first to establish average-case multiplicative-error sampling hardness for BosonSampling under a standard complexity-theoretic assumption. These findings provide the strongest average-case theoretical evidence to date for near-term quantum advantage.

📝 Abstract
BosonSampling is a popular candidate for near-term quantum advantage, which has now been experimentally implemented several times. The original proposal of Aaronson and Arkhipov from 2011 showed that classical hardness of BosonSampling is implied by a proof of the "Gaussian Permanent Estimation" conjecture. This conjecture states that $e^{-n\log n - n - O(\log n)}$ additive error estimates to the output probability of most random BosonSampling experiments are $\#P$-hard. Proving this conjecture has since become the central question in the theory of quantum advantage. In this work we make progress by proving that $e^{-n\log n - n - O(n^\delta)}$ additive error estimates to output probabilities of most random BosonSampling experiments are $\#P$-hard, for any $\delta>0$. In the process, we circumvent all known barrier results for proving the hardness of BosonSampling experiments. This is nearly the robustness needed to prove hardness of BosonSampling -- the remaining hurdle is now "merely" to show that the $n^\delta$ in the exponent can be improved to $O(\log n)$. We also obtain an analogous result for Random Circuit Sampling. Our result allows us to show, for the first time, a hardness of classical sampling result for random BosonSampling experiments, under an anticoncentration conjecture. Specifically, we prove the impossibility of multiplicative-error sampling from random BosonSampling experiments with probability $1-e^{-O(n)}$, unless the Polynomial Hierarchy collapses.
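The "Gaussian Permanent Estimation" conjecture concerns the permanent of an n x n matrix of i.i.d. complex Gaussian entries; in the Aaronson-Arkhipov framework, BosonSampling output probabilities are proportional to $|\mathrm{Per}(X)|^2$ for such (sub)matrices. As a concrete reference point for the quantity whose additive estimation the paper shows is #P-hard, here is a minimal sketch of exact permanent evaluation via Ryser's inclusion-exclusion formula (the helper name `permanent_ryser` is ours, not from the paper):

```python
import random
import math

def permanent_ryser(A):
    """Evaluate the matrix permanent via Ryser's inclusion-exclusion
    formula in O(2^n * n^2) time (vs. O(n! * n) for the naive sum)."""
    n = len(A)
    total = 0j
    for subset in range(1, 1 << n):
        prod = 1 + 0j
        for i in range(n):
            # row-i sum over the columns selected by `subset`
            prod *= sum(A[i][j] for j in range(n) if subset >> j & 1)
        # sign (-1)^(n - |subset|) from inclusion-exclusion
        sign = -1 if (n - bin(subset).count("1")) % 2 else 1
        total += sign * prod
    return total

# Draw an n x n matrix of i.i.d. standard complex Gaussian entries,
# the random ensemble for which the hardness result is stated.
n = 6
X = [[complex(random.gauss(0, 1), random.gauss(0, 1)) / math.sqrt(2)
      for _ in range(n)] for _ in range(n)]
weight = abs(permanent_ryser(X)) ** 2  # proportional to an output probability
```

Exact evaluation takes exponential time even with Ryser's speedup; the paper's question is whether merely *estimating* this quantity to additive error $e^{-n\log n - n - O(n^\delta)}$ remains #P-hard on average, which it answers affirmatively.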
Problem

Research questions and friction points this paper is trying to address.

Exponentially improving average-case hardness of BosonSampling experiments
Proving #P-hardness for additive-error estimates of output probabilities
Establishing classical sampling hardness under anticoncentration conjecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exponential improvement in additive-error hardness proofs
New proof techniques that tolerate exponential loss in the reduction
First average-case classical sampling hardness result
Adam Bouland, Department of Computer Science, Stanford University
Ishaun Datta, Department of Computer Science, Stanford University
Bill Fefferman, University of Chicago
Felipe Hernandez, Department of Mathematics, MIT