Fundamentals of quantum Boltzmann machine learning with visible and hidden units

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of training quantum Boltzmann machines (QBMs) with both visible and hidden units for generative modeling and quantum state learning. It establishes a gradient-theoretic framework for the quantum relative entropy and the Petz–Tsallis relative entropy as objective functions, deriving analytically tractable gradient expressions, a formula for the derivative of the matrix power function, and quantum gradient-estimation algorithms built from modular-flow-generated unitary rotations reminiscent of the Petz recovery map. The framework also covers hybrid settings with quantum visible and classical hidden units, and vice versa, and the resulting algorithms are amenable to implementation on quantum hardware. Together, these results narrow the gap between theoretical quantum information measures and practical, trainable latent-variable quantum generative models.
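In symbols (standard definitions from the QBM literature, not details specific to this paper), the objective is the quantum relative entropy between a target state $\rho$ and the reduced state of the visible units of the parameterized model, with a Hamiltonian that is linear in the parameters:

```latex
D(\rho \,\|\, \omega) \coloneqq \operatorname{Tr}\!\left[\rho \left(\ln \rho - \ln \omega\right)\right],
\qquad
\omega_{\theta} \coloneqq \operatorname{Tr}_{h}\!\left[\frac{e^{E(\theta)}}{\operatorname{Tr}\!\left[e^{E(\theta)}\right]}\right],
\qquad
E(\theta) = \sum_{j} \theta_{j} G_{j},
```

where $\operatorname{Tr}_h$ traces out the hidden units and the $G_j$ are fixed Hermitian operators. The difficulty the paper targets is that $\ln \omega_\theta$ does not decompose simply when the partial trace is present, which is what makes the gradient nontrivial in the hidden-unit setting.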

📝 Abstract
One of the primary applications of classical Boltzmann machines is generative modeling, wherein the goal is to tune the parameters of a model distribution so that it closely approximates a target distribution. Training relies on estimating the gradient of the relative entropy between the target and model distributions, a task that is well understood when the classical Boltzmann machine has both visible and hidden units. For some years now, it has been an open problem to generalize this result to quantum state learning with quantum Boltzmann machines that have both visible and hidden units. In this paper, I derive an analytical expression for the gradient of the quantum relative entropy between a target quantum state and the reduced state of the visible units of a quantum Boltzmann machine. Crucially, this expression is amenable to estimation on a quantum computer, as it involves modular-flow-generated unitary rotations reminiscent of those appearing in my prior work on rotated Petz recovery maps. This leads to a quantum algorithm for gradient estimation in this setting. I then specialize the setting to quantum visible units and classical hidden units, and vice versa, and provide analytical expressions for the gradients, along with quantum algorithms for estimating them. Finally, I replace the quantum relative entropy objective function with the Petz-Tsallis relative entropy; here I develop an analytical expression for the gradient and sketch a quantum algorithm for estimating it, as an application of a novel formula for the derivative of the matrix power function, which also involves modular-flow-generated unitary rotations. Ultimately, this paper demarcates progress in training quantum Boltzmann machines with visible and hidden units for generative modeling and quantum state learning.
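For context, in the simpler fully visible case (no hidden units, no partial trace) the gradient of the quantum relative entropy is known from prior QBM work to take the closed form $\partial_{\theta_j} D(\rho\|\sigma_\theta) = \operatorname{Tr}[(\sigma_\theta - \rho)\,G_j]$; the hidden-unit case treated in this paper is what lacks such a simple expression. Below is a minimal classical simulation (all names hypothetical, small dense matrices, not a quantum algorithm) that checks the visible-only formula against finite differences:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)

def rand_herm(d):
    """Random Hermitian matrix, used as a Hamiltonian term G_j."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

d = 4
G = [rand_herm(d) for _ in range(3)]   # Hamiltonian terms G_j
theta = rng.normal(size=3)             # model parameters

def sigma(theta):
    """Gibbs state sigma_theta = exp(E(theta)) / Tr[exp(E(theta))]."""
    E = sum(t * Gj for t, Gj in zip(theta, G))
    M = expm(E)
    return M / np.trace(M).real

# Random full-rank target density matrix rho.
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = B @ B.conj().T
rho /= np.trace(rho).real

def rel_entropy(rho, omega):
    """Quantum relative entropy D(rho || omega) = Tr[rho (ln rho - ln omega)]."""
    return np.trace(rho @ (logm(rho) - logm(omega))).real

# Known analytic gradient for the fully visible case: Tr[(sigma - rho) G_j].
s = sigma(theta)
grad_analytic = np.array([np.trace((s - rho) @ Gj).real for Gj in G])

# Central finite-difference gradient for comparison.
eps = 1e-5
grad_fd = np.zeros(3)
for j in range(3):
    tp, tm = theta.copy(), theta.copy()
    tp[j] += eps
    tm[j] -= eps
    grad_fd[j] = (rel_entropy(rho, sigma(tp)) - rel_entropy(rho, sigma(tm))) / (2 * eps)

print("max deviation:", np.max(np.abs(grad_analytic - grad_fd)))
```

The agreement hinges on the identity $\partial_{\theta_j} \ln \operatorname{Tr}[e^{E(\theta)}] = \operatorname{Tr}[\sigma_\theta G_j]$, which holds even for noncommuting $G_j$ because the trace absorbs the Duhamel ordering; no such shortcut survives the partial trace over hidden units, which is why the paper's modular-flow machinery is needed there.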
Problem

Research questions and friction points this paper is trying to address.

Derives gradient expression for quantum relative entropy in quantum Boltzmann machines
Develops quantum algorithms for gradient estimation with visible and hidden units
Extends approach to Petz-Tsallis entropy using modular-flow-generated unitaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analytical gradient expression for quantum relative entropy
Quantum algorithm for gradient estimation using modular flows
Extension to Petz-Tsallis entropy with matrix power derivatives
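For reference, the Petz–Tsallis relative entropy mentioned above is standardly defined (for suitable $\alpha \in (0,1) \cup (1,2]$) as

```latex
D_{\alpha}(\rho \,\|\, \sigma) \coloneqq \frac{1}{\alpha - 1}\left(\operatorname{Tr}\!\left[\rho^{\alpha} \sigma^{1-\alpha}\right] - 1\right),
```

which recovers the quantum relative entropy in the limit $\alpha \to 1$. The appearance of the matrix powers $\rho^{\alpha}$ and $\sigma^{1-\alpha}$ is what makes the paper's derivative-of-matrix-power formula the key technical ingredient for this objective.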