🤖 AI Summary
This work addresses the efficient computation of the measured relative entropy and measured Rényi relative entropy between quantum states ρ and σ. Leveraging their variational definitions, the computation reduces to optimizing a concave or convex objective function over the set of positive definite operators; we establish that this objective function is β-smooth and γ-strongly convex/concave, with β and γ governed by the max-relative entropies of ρ and σ. Building on this, we propose a Nesterov-accelerated projected gradient method, combining matrix calculus with variational characterizations for robust numerical evaluation. Compared to standard semidefinite programming approaches, our algorithm substantially reduces memory requirements and achieves significant speedups for well-conditioned states. It enables high-precision, large-scale computation of measured relative entropies, providing a scalable computational tool for quantum hypothesis testing and quantum information theory.
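For context, the variational formula underlying this optimization (due to Berta, Fawzi, and Tomamichel, stated here with the natural logarithm) expresses the measured relative entropy as a supremum over positive definite operators $\omega$:

```latex
D^{\mathbb{M}}(\rho \| \sigma)
  = \sup_{\omega > 0}
    \left\{ \operatorname{Tr}[\rho \log \omega] + 1 - \operatorname{Tr}[\sigma \omega] \right\}
```

The objective on the right is concave in $\omega$, which is what makes a projected gradient ascent approach applicable.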
📝 Abstract
The measured relative entropy and measured Rényi relative entropy are quantifiers of the distinguishability of two quantum states $\rho$ and $\sigma$. They are defined as the maximum classical relative entropy or Rényi relative entropy realizable by performing a measurement on $\rho$ and $\sigma$, and they have interpretations in terms of asymptotic quantum hypothesis testing. Crucially, they can be rewritten in terms of variational formulas involving the optimization of a concave or convex objective function over the set of positive definite operators. In this paper, we establish foundational properties of these objective functions by analyzing their matrix gradients and Hessian superoperators; namely, we prove that these objective functions are $\beta$-smooth and $\gamma$-strongly convex/concave, where $\beta$ and $\gamma$ depend on the max-relative entropies of $\rho$ and $\sigma$. A practical consequence of these properties is that we can conduct Nesterov accelerated projected gradient descent/ascent, a well-known classical optimization technique, to calculate the measured relative entropy and measured Rényi relative entropy to arbitrary precision. These algorithms are generally more memory efficient than our previous algorithms based on semidefinite optimization [Huang and Wilde, arXiv:2406.19060], and for well-conditioned states $\rho$ and $\sigma$, these algorithms are notably faster.
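To illustrate the optimization technique named in the abstract, the following is a minimal sketch of Nesterov accelerated projected gradient descent on a toy problem, not the paper's actual matrix objective: a $\beta$-smooth, $\gamma$-strongly convex quadratic minimized over the nonnegative orthant. The function name, the quadratic instance, and the choice of feasible set are all illustrative assumptions; the step size $1/\beta$ and the momentum parameter $(\sqrt{\beta}-\sqrt{\gamma})/(\sqrt{\beta}+\sqrt{\gamma})$ are the standard choices for this setting.

```python
# Illustrative sketch only (assumed toy problem, not the paper's objective):
# Nesterov accelerated projected gradient descent for
#     minimize f(x) = 0.5 x^T A x - b^T x   subject to x >= 0,
# where A is positive definite, so f is beta-smooth and gamma-strongly convex
# with beta / gamma the largest / smallest eigenvalues of A.
import numpy as np

def nesterov_projected_gradient(A, b, project, num_iters=500):
    eigvals = np.linalg.eigvalsh(A)                # ascending eigenvalues
    gamma, beta = eigvals[0], eigvals[-1]          # strong convexity / smoothness
    momentum = (np.sqrt(beta) - np.sqrt(gamma)) / (np.sqrt(beta) + np.sqrt(gamma))
    x = np.zeros_like(b)
    y = np.zeros_like(b)                           # extrapolated (momentum) point
    for _ in range(num_iters):
        grad = A @ y - b                           # gradient at the momentum point
        x_new = project(y - grad / beta)           # projected gradient step, step size 1/beta
        y = x_new + momentum * (x_new - x)         # Nesterov extrapolation
        x = x_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)                            # positive definite, gamma >= 1
b = rng.standard_normal(5)
x_star = nesterov_projected_gradient(A, b, project=lambda v: np.maximum(v, 0.0))
```

In the paper's setting the iterate would instead be a positive definite operator and the projection would act on the relevant operator set, but the smoothness and strong convexity/concavity constants play the same role: they fix the step size and momentum, and their ratio (the condition number) controls the convergence rate.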