Instance-Optimal Matrix Multiplicative Weight Update and Its Quantum Applications

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies the matrix-valued learning from expert advice (LEA) problem and proposes the first instance-optimal Matrix Multiplicative Weights Update (MMWU) algorithm. The algorithm achieves an instance-optimal regret bound expressed in terms of the quantum relative entropy between the comparator and the maximally mixed state, while keeping the same computational complexity as standard MMWU. Methodologically, it introduces a general analytical framework based on matrix potential functions, establishes a novel one-sided Jensen's trace inequality, and employs an optimal potential function derived from the imaginary error function, achieving instance optimality with no additional computational overhead. The analysis combines Laplace transform techniques, quantum relative entropy bounds, and an extension to linearized convex losses, which enables prediction of nonlinear quantum properties. The algorithm outperforms existing methods for learning quantum states corrupted by depolarizing noise, random quantum states, and Gibbs states, and efficiently predicts key quantum quantities including purity, quantum virtual cooling, and Rényi-2 correlation.
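For orientation, the sketch below shows the standard MMWU baseline (exponential potential) that the paper generalizes; it is not the paper's instance-optimal variant, and the function and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def mmwu_predictions(losses, eta):
    """Standard MMWU on the d-dimensional spectraplex (exponential potential).

    losses : list of d x d Hermitian loss matrices L_1, ..., L_T, observed online.
    eta    : learning rate, e.g. on the order of sqrt(log(d) / T).
    Yields the density-matrix prediction X_t made before L_t is revealed.
    """
    d = losses[0].shape[0]
    cumulative = np.zeros((d, d), dtype=complex)
    for L in losses:
        W = expm(-eta * cumulative)      # matrix exponential of accumulated losses
        X = W / np.trace(W).real         # normalize to unit trace (a density matrix)
        yield X                          # incur loss <L, X> = Tr(L X) this round
        cumulative += L
```

Under this scheme the first prediction is the maximally mixed state $d^{-1}I_d$; per the abstract, the paper's algorithm replaces the exponential potential with one based on the imaginary error function while retaining the same per-round cost.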

📝 Abstract
The Matrix Multiplicative Weight Update (MMWU) is a seminal online learning algorithm with numerous applications. Applied to the matrix version of the Learning from Expert Advice (LEA) problem on the $d$-dimensional spectraplex, it is well known that MMWU achieves the minimax-optimal regret bound of $O(\sqrt{T\log d})$, where $T$ is the time horizon. In this paper, we present an improved algorithm achieving the instance-optimal regret bound of $O(\sqrt{T\cdot S(X\|d^{-1}I_d)})$, where $X$ is the comparator in the regret, $I_d$ is the identity matrix, and $S(\cdot\|\cdot)$ denotes the quantum relative entropy. Furthermore, our algorithm has the same computational complexity as MMWU, indicating that the improvement in the regret bound is "free". Technically, we first develop a general potential-based framework for matrix LEA, with MMWU being its special case induced by the standard exponential potential. Then, the crux of our analysis is a new "one-sided" Jensen's trace inequality built on a Laplace transform technique, which allows the application of general potential functions beyond exponential to matrix LEA. Our algorithm is finally induced by an optimal potential function from the vector LEA problem, based on the imaginary error function. Complementing the above, we provide a memory lower bound for matrix LEA, and explore the applications of our algorithm in quantum learning theory. We show that it outperforms the state of the art for learning quantum states corrupted by depolarization noise, random quantum states, and Gibbs states. In addition, applying our algorithm to linearized convex losses enables predicting nonlinear quantum properties, such as purity, quantum virtual cooling, and Rényi-$2$ correlation.
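For reference, the quantum relative entropy appearing in the bound is the standard quantity

$$S(X\|Y) = \mathrm{Tr}\big[X(\log X - \log Y)\big], \qquad S(X\|d^{-1}I_d) = \log d - S_{\mathrm{vN}}(X) \le \log d,$$

where $S_{\mathrm{vN}}(X) = -\mathrm{Tr}[X\log X]$ is the von Neumann entropy. Since equality holds only for pure comparators, the instance-optimal bound $O(\sqrt{T\cdot S(X\|d^{-1}I_d)})$ is never worse than the minimax bound $O(\sqrt{T\log d})$, and it becomes much smaller when the comparator $X$ is close to the maximally mixed state.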
Problem

Research questions and friction points this paper is trying to address.

Achieving instance-optimal regret bound for matrix multiplicative weight update
Improving quantum state learning under various noise conditions
Enabling prediction of nonlinear quantum properties via convex optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Instance-optimal regret bound using quantum relative entropy
Same computational complexity as standard MMWU algorithm
Novel one-sided Jensen's trace inequality technique
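To make the instance-dependent quantity in the first point concrete, here is a small numerical check of $S(X\|d^{-1}I_d)$; this is illustrative only (not from the paper), and the helper name is assumed.

```python
import numpy as np
from scipy.linalg import logm

def quantum_relative_entropy(X, Y):
    """S(X||Y) = Tr[X (log X - log Y)] (natural log), assuming full-rank density matrices."""
    return np.trace(X @ (logm(X) - logm(Y))).real

d = 4
maximally_mixed = np.eye(d) / d
# Comparator close to the maximally mixed state: the instance term is far below log d.
X = np.diag([0.28, 0.26, 0.24, 0.22])
print(quantum_relative_entropy(X, maximally_mixed))  # ~ 0.004
print(np.log(d))                                     # ~ 1.386 (minimax-bound scale)
```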