🤖 AI Summary
This work investigates one-shot security bounds for randomness extraction against quantum side information, i.e., quantum privacy amplification. By introducing a novel quantum smooth conditional entropy, derived by lifting a measurement-based classical smooth divergence, and by proving a variational formulation of the smooth Rényi relative entropy that permits smoothing over non-positive Hermitian operators, the authors establish tighter leftover hash lemmas and decoupling bounds. This approach yields, for the first time, an optimal second-order asymptotic expansion of privacy amplification under trace distance that is valid for all hash functions, significantly improving upon existing smooth min-entropy bounds. The method simultaneously achieves the tightest known one-shot achievability and converse guarantees, while also recovering the optimal achievability result in the classical setting.
📝 Abstract
We introduce an improved one-shot characterisation of randomness extraction against quantum side information (privacy amplification), strengthening known one-shot bounds and providing a unified derivation of the tightest known asymptotic constraints. Our main tool is a new class of smooth conditional entropies defined by lifting classical smooth divergences through measurements. A key role is played by the measured smooth Rényi relative entropy of order 2, which we show to admit an equivalent variational form: it can be understood as allowing for smoothing over not only states, but also non-positive Hermitian operators. Building on this, we establish a tightened leftover hash lemma, significantly improving over all known smooth min-entropy bounds on extractable randomness and recovering the sharpest classical achievability results. We extend these methods to decoupling, the coherent analogue of privacy amplification, obtaining a corresponding improved one-shot bound. Relaxing our smooth entropy bounds leads to one-shot achievability results in terms of measured Rényi divergences, tightening the bounds of [Dupuis, arXiv:2105.05342] and recovering the state-of-the-art asymptotic i.i.d. error exponents shown there. We show an approximate optimality of our results by giving a matching one-shot converse bound up to additive logarithmic terms. This yields an optimal second-order asymptotic expansion of privacy amplification under trace distance, establishing a significantly tighter one-shot achievability result than previously shown in [Shen et al., arXiv:2202.11590] and proving its optimality for all hash functions.
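For orientation, a minimal sketch of the standard (non-smoothed) quantities the abstract builds on; these are textbook definitions from the quantum information literature, not the paper's new entropies, and the notation (measurement channel $\mathcal{M}$, hash output $Z$ of length $\ell$) is ours:

```latex
% Petz Rényi relative entropy of order 2, and its measured variant,
% where the supremum runs over all measurement (quantum-to-classical) channels:
D_2(\rho \| \sigma) = \log \operatorname{Tr}\!\left[ \rho^2 \sigma^{-1} \right],
\qquad
D_2^{\mathbb{M}}(\rho \| \sigma)
  = \sup_{\mathcal{M}} D_2\!\big( \mathcal{M}(\rho) \,\big\|\, \mathcal{M}(\sigma) \big).

% Standard leftover hash lemma against quantum side information (Renner):
% hashing A down to an \ell-bit output Z with a two-universal family gives
\frac{1}{2} \big\| \rho_{ZE} - \pi_Z \otimes \rho_E \big\|_1
  \;\le\; 2\varepsilon + \frac{1}{2}\, 2^{-\frac{1}{2}\left( H_{\min}^{\varepsilon}(A|E)_{\rho} - \ell \right)},
```

where $\pi_Z$ is the maximally mixed state on the output register. The paper's contribution can be read as replacing the smooth min-entropy $H_{\min}^{\varepsilon}$ in bounds of this shape by sharper smooth conditional entropies built from $D_2^{\mathbb{M}}$.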