DO-EM: Density Operator Expectation Maximization

📅 2025-07-30
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the absence of a scalable expectation-maximization (EM) training framework for density operator models (DOMs) on classical hardware, and proposes DO-EM, a general EM framework for quantum-inspired generative models. Because DOMs lack a well-defined analogue of classical conditional probability, the conventional E-step cannot be applied directly; the authors instead cast the E-step as a quantum information projection and solve it with the Petz recovery map, yielding a quantum evidence lower bound (QELBO) whose maximization provably guarantees a non-decreasing log-likelihood. Combining this minorant-maximization scheme with contrastive divergence, DO-EM trains efficiently on classical devices. On MNIST image generation, a QiDBM trained with DO-EM achieves a 40–60% lower Fréchet Inception Distance (FID) than larger classical deep Boltzmann machines, at comparable training cost.
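For background, the classical evidence lower bound (ELBO) that the paper's QELBO generalizes can be written as follows. This is the standard EM derivation, not the paper's quantum formulation:

```latex
\log p_\theta(x)
  \;\ge\; \mathbb{E}_{q(z)}\!\left[\log p_\theta(x, z)\right]
        - \mathbb{E}_{q(z)}\!\left[\log q(z)\right]
  \;=:\; \mathcal{L}(q, \theta).
```

The classical E-step sets $q_t(z) = p_{\theta_t}(z \mid x)$, which makes the bound tight, and the M-step sets $\theta_{t+1} = \arg\max_\theta \mathcal{L}(q_t, \theta)$, giving the monotonicity chain

```latex
\log p_{\theta_{t+1}}(x)
  \;\ge\; \mathcal{L}(q_t, \theta_{t+1})
  \;\ge\; \mathcal{L}(q_t, \theta_t)
  \;=\; \log p_{\theta_t}(x).
```

Per the abstract, the obstacle for DOMs is that $p_{\theta}(z \mid x)$ has no well-defined quantum analogue, so DO-EM replaces the tightening step with a quantum information projection solved via the Petz recovery map.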

📝 Abstract
Density operators, quantum generalizations of probability distributions, are gaining prominence in machine learning due to their foundational role in quantum computing. Generative modeling based on density operator models (DOMs) is an emerging field, but existing training algorithms -- such as those for the Quantum Boltzmann Machine -- do not scale to real-world data, such as the MNIST dataset. The Expectation-Maximization algorithm has played a fundamental role in enabling scalable training of probabilistic latent variable models on real-world datasets. In this paper, we develop an Expectation-Maximization framework to learn latent variable models defined through DOMs on classical hardware, with resources comparable to those used for probabilistic models, while scaling to real-world data. However, designing such an algorithm is nontrivial due to the absence of a well-defined quantum analogue to conditional probability, which complicates the Expectation step. To overcome this, we reformulate the Expectation step as a quantum information projection (QIP) problem and show that the Petz Recovery Map provides a solution under sufficient conditions. Using this formulation, we introduce the Density Operator Expectation Maximization (DO-EM) algorithm -- an iterative Minorant-Maximization procedure that optimizes a quantum evidence lower bound. We show that the DO-EM algorithm ensures non-decreasing log-likelihood across iterations for a broad class of models. Finally, we present Quantum Interleaved Deep Boltzmann Machines (QiDBMs), a DOM that can be trained with the same resources as a DBM. When trained with DO-EM under Contrastive Divergence, a QiDBM outperforms larger classical DBMs in image generation on the MNIST dataset, achieving a 40--60% reduction in the Fréchet Inception Distance.
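The minorant-maximization pattern that DO-EM generalizes can be seen in classical EM. Below is a minimal sketch: EM for a 1-D two-component Gaussian mixture, whose log-likelihood is non-decreasing across iterations, mirroring the monotonicity DO-EM proves for its quantum evidence lower bound. All names here are illustrative and not from the paper:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50, seed=0):
    """Classical EM for a two-component 1-D Gaussian mixture.

    Returns the fitted means and the per-iteration log-likelihoods,
    which EM guarantees to be non-decreasing (the classical analogue
    of the QELBO monotonicity result in the paper).
    """
    rng = np.random.default_rng(seed)
    pi = np.array([0.5, 0.5])                         # mixture weights
    mu = rng.choice(x, size=2, replace=False).astype(float)
    var = np.array([x.var(), x.var()])                # component variances
    lls = []
    for _ in range(n_iter):
        # E-step: posterior responsibilities -- the conditional-probability
        # step that, per the abstract, has no direct quantum analogue and is
        # replaced in DO-EM by a quantum information projection.
        dens = (pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        lls.append(np.log(dens.sum(axis=1)).sum())
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximize the minorant in closed form.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return mu, lls

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
    mu, lls = em_gmm_1d(x)
    # Log-likelihood never decreases (up to floating-point tolerance).
    assert all(b >= a - 1e-9 for a, b in zip(lls, lls[1:]))
    print(np.sort(mu))
```

The paper's DO-EM keeps this two-step structure but defines both the model and the minorant over density operators, with the M-step made tractable on classical hardware via contrastive divergence.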
Problem

Research questions and friction points this paper is trying to address.

Scaling density operator models to real-world data like MNIST
Developing EM framework for DOMs on classical hardware
Overcoming absence of quantum conditional probability in EM
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expectation-Maximization framework for DOMs
Quantum information projection solves Expectation step
QiDBMs trained with DO-EM outperform classical DBMs
Adit Vishnu
Department of Computer Science and Automation, Indian Institute of Science, Bangalore
Abhay Shastry
Department of Computer Science and Automation, Indian Institute of Science, Bangalore
Dhruva Kashyap
Department of Computer Science and Automation, Indian Institute of Science, Bangalore
Chiranjib Bhattacharyya
Professor, Indian Institute of Science
Machine Learning