Non-negative Tensor Mixture Learning for Discrete Density Estimation

📅 2024-05-28
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Nonnegative tensor decomposition under KL-divergence minimization typically relies on iterative M-step updates and manual learning-rate tuning, and existing methods lack a unified framework for modeling diverse low-rank structures (e.g., CP, Tucker, Tensor Train). Method: The authors propose an adaptive hybrid tensor learning method grounded in the EM framework. It unifies CP, Tucker, and Tensor Train decompositions, including their weighted mixtures, within a single probabilistic model, where mixture weights are learned automatically from the data. Crucially, the M-step employs closed-form multiplicative updates derived from a many-body approximation, eliminating per-step iterative optimization and learning-rate tuning. Results: On discrete probability density estimation, the method outperforms conventional tensor-based approaches in both classification accuracy and density-modeling fidelity, demonstrating superior generalization. It offers an efficient, robust, and structurally flexible approach to KL-divergence-based nonnegative tensor learning.

📝 Abstract
We present an expectation-maximization (EM) based unified framework for non-negative tensor decomposition that optimizes the Kullback-Leibler divergence. To avoid iterations in each M-step and learning rate tuning, we establish a general relationship between low-rank decompositions and many-body approximations. Using this connection, we exploit that the closed-form solution of the many-body approximation updates all parameters simultaneously in the M-step. Our framework offers not only a unified methodology for a variety of low-rank structures, including CP, Tucker, and Tensor Train decompositions, but also their mixtures. Notably, the weights of each low-rank tensor in the mixture can be learned from the data, which enables us to leverage the advantage of different low-rank structures without careful selection of the structure in advance. We empirically demonstrate that our framework overall provides superior generalization in terms of discrete density estimation and classification when compared to conventional tensor-based approaches.
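To make the abstract's idea concrete, here is a minimal illustrative sketch of EM-style non-negative tensor decomposition under KL divergence, for the CP structure only. It uses the classic multiplicative updates (the Lee–Seung NMF rule generalized to a 3-way tensor), which admit an EM interpretation; the paper's unified many-body M-step covering Tucker, Tensor Train, and learned mixtures is not reproduced here, and all function and variable names are hypothetical.

```python
import numpy as np

def kl_cp_decompose(X, rank, n_iter=1000, eps=1e-12, seed=0):
    """Sketch: non-negative CP decomposition of a 3-way tensor X
    minimizing KL divergence via multiplicative (EM-style) updates.
    Illustrative only; not the paper's unified many-body M-step."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank)) + eps  # non-negative factor matrices
    B = rng.random((J, rank)) + eps
    C = rng.random((K, rank)) + eps
    for _ in range(n_iter):
        # Update each factor in turn; R = X / Xhat plays the role
        # of the E-step responsibility ratio.
        Xhat = np.einsum('ir,jr,kr->ijk', A, B, C) + eps
        R = X / Xhat
        A *= np.einsum('ijk,jr,kr->ir', R, B, C) / (B.sum(0) * C.sum(0) + eps)
        Xhat = np.einsum('ir,jr,kr->ijk', A, B, C) + eps
        R = X / Xhat
        B *= np.einsum('ijk,ir,kr->jr', R, A, C) / (A.sum(0) * C.sum(0) + eps)
        Xhat = np.einsum('ir,jr,kr->ijk', A, B, C) + eps
        R = X / Xhat
        C *= np.einsum('ijk,ir,jr->kr', R, A, B) / (A.sum(0) * B.sum(0) + eps)
    return A, B, C
```

Each multiplicative update monotonically decreases the KL divergence while preserving non-negativity, which is what removes the need for a learning rate; the paper's contribution is obtaining such closed-form simultaneous updates for a much broader family of low-rank structures and their mixtures.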
Problem

Research questions and friction points this paper is trying to address.

Existing KL-divergence-based non-negative tensor decompositions require iterative optimization inside each M-step
Gradient-based updates demand manual learning-rate tuning
No unified framework covers diverse low-rank structures (CP, Tucker, Tensor Train) or their mixtures
Innovation

Methods, ideas, or system contributions that make the work stand out.

EM-based unified framework for non-negative tensor decomposition under KL divergence
Closed-form many-body approximation updates all parameters simultaneously in each M-step
Learns mixture weights over CP, Tucker, and Tensor Train structures directly from data