🤖 AI Summary
This work improves upper bounds on the generalization error of quantum learning algorithms via quantum Rényi divergences, using a variational approach to evaluate the Petz and a newly introduced modified sandwiched quantum Rényi divergence, and shows that the latter yields tighter bounds.
📝 Abstract
This work advances the theoretical understanding of quantum learning by establishing a new family of upper bounds on the expected generalization error of quantum learning algorithms, leveraging the framework introduced by Caro et al. (2024) and a new definition of the expected true loss. Our primary contribution is the derivation of these bounds in terms of quantum and classical Rényi divergences, utilizing a variational approach for evaluating quantum Rényi divergences, specifically the Petz and a newly introduced modified sandwiched quantum Rényi divergence. Analytically and numerically, we demonstrate the superior performance of the bounds derived using the modified sandwiched quantum Rényi divergence compared to those based on the Petz divergence. Furthermore, we provide probabilistic generalization error bounds using two distinct techniques: one based on the modified sandwiched quantum Rényi divergence and the classical Rényi divergence, and another employing the smooth max Rényi divergence.
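For context, the two standard families of quantum Rényi divergences referenced above have the following well-known definitions for density operators $\rho, \sigma$ and order $\alpha \in (0,1) \cup (1,\infty)$ (the paper's "modified sandwiched" variant is a new construction and is not reproduced here):

```latex
% Petz quantum Rényi divergence
\bar{D}_\alpha(\rho \,\|\, \sigma)
  = \frac{1}{\alpha - 1}
    \log \operatorname{Tr}\!\left[ \rho^{\alpha} \sigma^{1-\alpha} \right]

% Sandwiched quantum Rényi divergence
\widetilde{D}_\alpha(\rho \,\|\, \sigma)
  = \frac{1}{\alpha - 1}
    \log \operatorname{Tr}\!\left[
      \left( \sigma^{\frac{1-\alpha}{2\alpha}} \,\rho\,
             \sigma^{\frac{1-\alpha}{2\alpha}} \right)^{\!\alpha}
    \right]
```

Both families recover the Umegaki relative entropy $D(\rho\|\sigma) = \operatorname{Tr}[\rho(\log\rho - \log\sigma)]$ in the limit $\alpha \to 1$, and the sandwiched divergence lower-bounds the Petz divergence for $\alpha > 1$ by the Araki–Lieb–Thirring inequality.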