Lower Bounds on Learning Pauli Channels With Individual Measurements

📅 2023-01-22
🏛️ IEEE Transactions on Information Theory
📈 Citations: 13
Influential: 1
📄 PDF
🤖 AI Summary
This work establishes fundamental lower bounds on the sample complexity of learning an $n$-qubit Pauli channel, under the constraints of no auxiliary entanglement, a measurement performed before each reuse of the channel, and accuracy measured in diamond-norm error $\varepsilon$. We analyze both non-adaptive and adaptive strategies. Technically, our approach integrates quantum information theory, diamond-norm analysis, structural characterization of Pauli channels, and probabilistic methods. We derive lower bounds of $\Omega(2^{3n}\varepsilon^{-2})$ for non-adaptive learning and $\Omega(2^{2.5n}\varepsilon^{-2})$ for adaptive learning with $\varepsilon = \mathcal{O}(2^{-n})$. These results demonstrate that the Flammia–Wallman algorithm is essentially optimal in the non-adaptive setting. Our bounds reveal the intrinsic hardness of Pauli noise learning and establish rigorous theoretical limits for quantum hardware characterization and randomized compiling protocols.
📝 Abstract
Understanding the noise affecting a quantum device is of fundamental importance for scaling quantum technologies. A particularly important class of noise models is that of Pauli channels, as randomized compiling techniques can effectively bring any quantum channel to this form and they are significantly more structured than general quantum channels. In this paper, we show fundamental lower bounds on the sample complexity for learning Pauli channels in diamond norm. We consider strategies that may not use auxiliary systems entangled with the input to the unknown channel and have to perform a measurement before reusing the channel. For non-adaptive algorithms, we show a lower bound of $\Omega(2^{3n}\varepsilon^{-2})$ to learn an $n$-qubit Pauli channel. In particular, this shows that the recently introduced learning procedure by Flammia and Wallman (2020) is essentially optimal. In the adaptive setting, we show a lower bound of $\Omega(2^{2.5n}\varepsilon^{-2})$ for $\varepsilon = \mathcal{O}(2^{-n})$, and a lower bound of $\Omega(2^{2n}\varepsilon^{-2})$ for any $\varepsilon > 0$. This last lower bound holds even in a stronger model where, in each step, before performing the measurement, the unknown channel may be used arbitrarily many times sequentially, interspersed with unital operations.
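As a rough illustration of these scalings (not from the paper), the bounds can be evaluated numerically; note that $\Omega(\cdot)$ hides constant factors, so the numbers below are order-of-magnitude estimates, not exact sample requirements:

```python
# Illustrative only: Omega(.) hides constant factors, so these are
# order-of-magnitude sample counts, not exact requirements.
def nonadaptive_lower_bound(n, eps):
    # Omega(2^{3n} eps^{-2}) for non-adaptive strategies
    return 2 ** (3 * n) / eps ** 2

def adaptive_lower_bound(n, eps):
    # Omega(2^{2.5n} eps^{-2}), shown for eps = O(2^{-n})
    return 2 ** (2.5 * n) / eps ** 2

for n in (3, 5, 8):
    eps = 0.01
    print(f"n={n}: non-adaptive ≳ {nonadaptive_lower_bound(n, eps):.1e}, "
          f"adaptive ≳ {adaptive_lower_bound(n, eps):.1e}")
```

Even for modest qubit counts the non-adaptive bound grows as $8^n$, which is the exponential separation the paper attributes to the lack of auxiliary entanglement.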
Problem

Research questions and friction points this paper is trying to address.

Estimate sample complexity for learning Pauli channels
Prove lower bounds for non-adaptive Pauli channel learning
Establish adaptive learning bounds for quantum noise models
Innovation

Methods, ideas, or system contributions that make the work stand out.

First lower bounds for learning Pauli channels in diamond norm without auxiliary entanglement
$\Omega(2^{3n}\varepsilon^{-2})$ sample-complexity bound showing non-adaptive algorithms are essentially optimal
Adaptive lower bounds that hold even in a stronger model allowing sequential channel uses with unital operations
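To build intuition for the measurement model studied here (no auxiliary entanglement, measure before each reuse), the following is a minimal sketch, not taken from the paper, of learning a hypothetical single-qubit Pauli channel: prepare each Pauli eigenstate, send it through the channel, measure in the same basis, and invert the resulting flip rates. The channel probabilities `p` are an arbitrary example.

```python
import random

# Hypothetical single-qubit Pauli channel (unknown to the learner):
# applies I, X, Y, Z with these probabilities.
p = {"I": 0.85, "X": 0.06, "Y": 0.04, "Z": 0.05}

def flip_prob(flipping_paulis, shots=200_000, rng=random.Random(0)):
    # Empirical probability that the sampled Pauli error lies in
    # `flipping_paulis`, the set of Paulis that flip the prepared eigenstate.
    paulis, weights = zip(*p.items())
    flips = sum(rng.choices(paulis, weights)[0] in flipping_paulis
                for _ in range(shots))
    return flips / shots

# Prepare/measure in Z: X and Y flip |0>.  In X: Y and Z flip |+>.
# In Y: X and Z flip the +1 eigenstate of Y.
f_z = flip_prob({"X", "Y"})
f_x = flip_prob({"Y", "Z"})
f_y = flip_prob({"X", "Z"})

# Invert the linear system relating flip rates to error probabilities.
est = {
    "X": (f_z + f_y - f_x) / 2,
    "Y": (f_z + f_x - f_y) / 2,
    "Z": (f_x + f_y - f_z) / 2,
}
est["I"] = 1 - est["X"] - est["Y"] - est["Z"]
print(est)
```

For $n$ qubits the analogous unentangled strategies must distinguish exponentially many Pauli error rates, which is where the $\Omega(2^{3n}\varepsilon^{-2})$ hardness enters.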
Omar Fawzi
Inria, ENS Lyon
Quantum Information Theory · Theoretical Computer Science
Aadil Oufkir
Univ Lyon, Inria, ENS Lyon, UCBL, LIP, Lyon, France
Daniel Stilck França
Univ Lyon, Inria, ENS Lyon, UCBL, LIP, Lyon, France