Adversarially robust quantum state learning and testing

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses learning and testing quantum states under adversarial noise, such as malicious tampering or readout errors, going beyond the standard state preparation and measurement (SPAM) noise model. Method: We introduce the $\gamma$-adversarial corruption model, the first noise model strictly stronger than SPAM, and design robust, single-copy, non-adaptive measurement algorithms for learning and testing low-rank quantum states. Contributions/Results: We prove an upper bound of $\tilde{O}(\gamma\sqrt{r})$ on the trace-distance error, matching the information-theoretic lower bound of $\Omega(\gamma\sqrt{r})$ and thus achieving optimality. Crucially, we establish, for the first time, that low-rank quantum states admit dimension-independent robustness in high dimensions, whereas generic states are inherently fragile. Our analysis integrates trace-distance-based error control, information-theoretic lower-bound derivation, robust estimation techniques, and matrix perturbation theory.
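To make the corruption model concrete, here is a minimal, illustrative sketch (not the paper's algorithm): estimating a single measurement-outcome probability $p$ from $n$ single-copy outcomes after an adversary overwrites a $\gamma$-fraction of them. Because outcomes are bounded in $\{0, 1\}$, corrupting a $\gamma$-fraction can shift the empirical mean by at most $\gamma$, so the estimation error stays $O(\gamma)$ once $n$ is large. All names here (`corrupted_mean_error`, the chosen $p$, $n$, $\gamma$) are hypothetical.

```python
import random
import statistics

def corrupted_mean_error(p, n, gamma, seed=0):
    """Estimate a measurement-outcome probability p from n single-copy
    0/1 outcomes after an adversary overwrites a gamma-fraction of them.
    Returns the absolute error of the plain empirical-mean estimator."""
    rng = random.Random(seed)
    outcomes = [1 if rng.random() < p else 0 for _ in range(n)]
    # Adversary arbitrarily rewrites a gamma-fraction of the record;
    # forcing them all to 1 biases the mean estimator upward.
    k = int(gamma * n)
    for i in range(k):
        outcomes[i] = 1
    estimate = statistics.fmean(outcomes)
    return abs(estimate - p)

# Each corrupted outcome moves the mean by at most 1/n, so the total
# adversarial shift is at most gamma, plus O(1/sqrt(n)) sampling noise.
err = corrupted_mean_error(p=0.3, n=100_000, gamma=0.05)
print(err)  # close to gamma * (1 - p) = 0.035, plus sampling noise
```

This per-observable $O(\gamma)$ error is what the paper's analysis must then propagate through full state tomography, which is where the $\sqrt{r}$ versus $\sqrt{d}$ gap appears.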

📝 Abstract
Quantum state learning is a fundamental problem in physics and computer science. As near-term quantum devices are error-prone, it is important to design error-resistant algorithms. Apart from device errors, other unexpected factors could also affect the algorithm, such as careless human readout errors, or even a malicious hacker deliberately altering the measurement results. Thus, we want our algorithm to work even in the worst case, when things are not in our favor. We consider the practical setting of single-copy measurements and propose the $\gamma$-adversarial corruption model, where an imaginary adversary can arbitrarily change a $\gamma$-fraction of the measurement outcomes. This is stronger than the $\gamma$-bounded SPAM noise model, where the post-measurement state changes by at most $\gamma$ in trace distance. Under our stronger model of corruption, we design an algorithm using non-adaptive measurements that can learn an unknown rank-$r$ state up to $\tilde{O}(\gamma\sqrt{r})$ in trace distance, provided that the number of copies is sufficiently large. We further prove an information-theoretic lower bound of $\Omega(\gamma\sqrt{r})$ for non-adaptive measurements, demonstrating the optimality of our algorithm. Our upper and lower bounds also hold for quantum state testing, where the goal is to test whether an unknown state is equal to a given state or far from it. Our results are intriguingly optimistic and pessimistic at the same time. For general states, the error is dimension-dependent and $\gamma\sqrt{d}$ in the worst case, meaning that corrupting only a very small fraction ($1/\sqrt{d}$) of the outcomes could totally destroy any non-adaptive learning algorithm. However, for constant-rank states that are useful in many quantum algorithms, it is possible to achieve dimension-independent error, even in the worst-case adversarial setting.
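One standard linear-algebra fact consistent with the rank dependence above (a sketch of the intuition, not the paper's full argument): the difference of two rank-$r$ states has rank at most $2r$, so by Cauchy-Schwarz on the singular values,

\[
\|\rho - \sigma\|_1 \;\le\; \sqrt{\operatorname{rank}(\rho - \sigma)}\,\|\rho - \sigma\|_2 \;\le\; \sqrt{2r}\,\|\rho - \sigma\|_2 .
\]

Thus a Hilbert-Schmidt error of order $\gamma$ translates into trace-distance error $O(\gamma\sqrt{r})$ for rank-$r$ states, whereas for full-rank states the same conversion costs a factor of $\sqrt{d}$, matching the abstract's $\gamma\sqrt{d}$ worst case.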
Problem

Research questions and friction points this paper is trying to address.

Designing adversarially robust quantum state learning algorithms
Handling worst-case corruption of measurement outcomes
Achieving optimal error bounds for constant-rank states
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-adaptive measurements for adversarial corruption
Optimal trace distance learning for rank-r states
Dimension-independent error for constant-rank states