🤖 AI Summary
To address the instability that operator non-commutativity and vanishing gradients cause in gradient-based training of quantum Boltzmann machines (QBMs), this paper introduces the first quantum analog of the information-geometric em algorithm (an extension of the classical expectation-maximization (EM) algorithm), termed quantum em (QEM). QEM bypasses gradient computation entirely, deriving closed-form parameter update rules that preserve quantum expressivity while markedly improving optimization stability and convergence speed. Technically, it employs a semi-quantum restricted Boltzmann machine architecture, quantizing only the hidden layer while keeping the visible units classical, and combines variational inference with the quantum Fisher information metric for efficient learning. Experiments on multiple benchmark datasets show that QEM is more robust than mainstream quantum gradient methods (e.g., quantum SGD and Adam) and achieves strong performance on both classification and generative modeling tasks, outperforming gradient-based training in several cases.
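The summary mentions closed-form updates on a semi-quantum RBM but does not reproduce the paper's equations. As a rough, self-contained illustration of the kind of closed-form quantity such a scheme can exploit, the sketch below evaluates the thermal expectation ⟨σ_z⟩ of a transverse-field hidden qubit with the classical visible units clamped (a standard closed form for non-interacting quantum hidden units; the names `W`, `b`, `gamma`, and `beta` are illustrative, not the paper's notation), together with a brute-force density-matrix check.

```python
import numpy as np

# Illustrative clamped expectation for a semi-quantum RBM with classical
# visible units v and non-interacting transverse-field hidden qubits.
# Given v, hidden unit j sees the effective single-qubit Hamiltonian
#   H_j(v) = -(b_j + sum_i W_ij v_i) * sigma_z - gamma * sigma_x,
# whose thermal expectation <sigma_z> has a closed form (no gradients needed).

def hidden_expectations(v, W, b, gamma, beta=1.0):
    """Closed-form <sigma_z> for each hidden qubit, visible units clamped to v."""
    b_eff = b + v @ W                     # effective longitudinal field per unit
    d = np.sqrt(b_eff**2 + gamma**2)      # energy scale of each effective qubit
    return (b_eff / d) * np.tanh(beta * d)

def brute_force_sz(a, gamma, beta=1.0):
    """Same expectation via the explicit 2x2 thermal density matrix."""
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    evals, evecs = np.linalg.eigh(-a * sz - gamma * sx)
    w = np.exp(-beta * evals)
    w /= w.sum()                          # Gibbs weights of the two levels
    rho = (evecs * w) @ evecs.T           # thermal state e^{-beta H} / Z
    return float(np.trace(rho @ sz))

v = np.array([1.0, -1.0, 1.0])            # clamped classical visible units
W = 0.3 * np.ones((3, 2))
b = np.array([0.1, -0.2])
print(hidden_expectations(v, W, b, gamma=0.5))
print(brute_force_sz(a=b[0] + v @ W[:, 0], gamma=0.5))  # matches first entry
```

Because the hidden qubits are non-interacting once the visible units are clamped, this expectation is exact rather than sampled, which is the kind of structure that makes analytical update rules possible in the semi-quantum setting.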
📝 Abstract
We develop a quantum version of the em algorithm for training quantum Boltzmann machines. The em algorithm is an information-geometric extension of the well-known expectation-maximization (EM) algorithm, offering a structured alternative to gradient-based methods with potential advantages in stability and convergence. We implement the algorithm on a semi-quantum restricted Boltzmann machine, where quantum effects are confined to the hidden layer. This structure enables analytical update rules while preserving quantum expressivity. Numerical experiments on benchmark datasets show that the proposed method achieves stable learning and outperforms gradient-based training in several cases. These results demonstrate the potential of information-geometric optimization for quantum machine learning, particularly in settings where standard methods struggle due to non-commutativity or vanishing gradients.
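The abstract does not spell out the update rules, but the alternating e-step/m-step structure it refers to can be sketched on a purely classical toy problem. Below is a minimal runnable example, a two-coin Bernoulli mixture chosen for illustration only (the information-geometric em algorithm coincides with classical EM in many standard exponential-family cases), showing how each step is a closed-form update with no gradients involved.

```python
import numpy as np

# Toy EM for a two-coin Bernoulli mixture: a classical stand-in for the
# alternating e-/m-step structure the abstract describes. The model and all
# names are illustrative, not the paper's setup.

def em_two_coins(heads, flips, p=(0.3, 0.7), pi=0.5, n_iters=50):
    """heads[k], flips[k]: head count and length of the k-th flip sequence."""
    heads, flips = np.asarray(heads, float), np.asarray(flips, float)
    p1, p2 = p
    for _ in range(n_iters):
        # e-step: posterior responsibility of coin 1 for each sequence
        l1 = pi * p1**heads * (1 - p1)**(flips - heads)
        l2 = (1 - pi) * p2**heads * (1 - p2)**(flips - heads)
        r = l1 / (l1 + l2)
        # m-step: closed-form weighted maximum-likelihood updates
        p1 = (r * heads).sum() / (r * flips).sum()
        p2 = ((1 - r) * heads).sum() / ((1 - r) * flips).sum()
        pi = r.mean()
    return p1, p2, pi

# sequences of 10 flips each, drawn from coins with biases near 0.2 and 0.8
print(em_two_coins(heads=[2, 1, 8, 9, 3, 8], flips=[10] * 6))
```

Each iteration is a pair of closed-form projections, which is the stability argument for em-style training: there is no step size to tune and no gradient to vanish.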