AI Summary
This work addresses the parameter ambiguity and inefficient training inherent in conventional matrix product state (MPS)-based generative models. The authors propose a unitary MPS framework that reformulates probabilistic modeling as a manifold-constrained optimization problem and, for the first time, introduces Riemannian optimization techniques to this setting. By integrating a space-decoupling algorithm, the method enables efficient and stable training while effectively eliminating parameter indeterminacy. Empirical evaluations on the Bars-and-Stripes and EMNIST datasets demonstrate the model's rapid structural adaptability, strong generative performance, and favorable computational efficiency, achieving a balanced trade-off between expressivity and scalability.
Abstract
Tensor networks, originally developed to characterize complex quantum many-body systems, have recently emerged as a powerful framework for capturing high-dimensional probability distributions with strong physical interpretability. This paper systematically studies matrix product states (MPS) for generative modeling and shows that the unitary MPS, a tensor-network architecture that is both simple and expressive, offers clear benefits for unsupervised learning by reducing ambiguity in parameter updates and improving efficiency. To overcome the inefficiency of standard gradient-based MPS training, we develop a Riemannian optimization approach that casts probabilistic modeling as an optimization problem with manifold constraints, and we further derive an efficient space-decoupling algorithm. Experiments on the Bars-and-Stripes and EMNIST datasets demonstrate fast adaptation to data structure, stable updates, and strong performance, while maintaining the efficiency and expressive power of MPS.
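To make the Riemannian viewpoint concrete: a unitary (isometric) MPS core, reshaped into a matrix with orthonormal columns, lives on a Stiefel manifold, so one training step projects the Euclidean gradient onto the tangent space at the current point and then retracts the update back onto the manifold. The NumPy sketch below illustrates a generic projection-plus-QR-retraction step under that assumption; it is our illustration, not the paper's space-decoupling algorithm, and all names (`riemannian_step`, the shapes `Dl`, `d`, `Dr`) are hypothetical.

```python
# Minimal illustrative sketch: one Riemannian gradient-descent step for a
# unitary MPS core constrained to the Stiefel manifold St(n, p).
import numpy as np

def riemannian_step(W, euclid_grad, lr=0.1):
    """One step on St(n, p) = {W : W^T W = I_p}: project the Euclidean
    gradient onto the tangent space at W, then retract via QR."""
    # Tangent-space projection (embedded metric):
    # grad = G - W * sym(W^T G), where sym(A) = (A + A^T) / 2.
    WtG = W.T @ euclid_grad
    rgrad = euclid_grad - W @ (WtG + WtG.T) / 2
    # Step along the negative Riemannian gradient, then retract onto the
    # manifold by taking the Q factor of a (reduced) QR decomposition.
    Q, R = np.linalg.qr(W - lr * rgrad)
    # Fix column signs (R's diagonal nonnegative) so the retraction is
    # well defined and continuous.
    Q *= np.sign(np.sign(np.diag(R)) + 0.5)
    return Q

# Usage: treat an MPS core A of shape (Dl, d, Dr) as a (Dl*d) x Dr isometry,
# take one step with a placeholder gradient, and restore the tensor shape.
Dl, d, Dr = 4, 2, 4
A = np.linalg.qr(np.random.randn(Dl * d, Dr))[0]   # random initial isometry
G = np.random.randn(Dl * d, Dr)                    # placeholder gradient
A_new = riemannian_step(A, G)
print(np.allclose(A_new.T @ A_new, np.eye(Dr)))    # True: isometry preserved
core = A_new.reshape(Dl, d, Dr)                    # back to MPS core shape
```

Because the update never leaves the manifold, the unitarity constraint holds exactly after every step; this is what removes the gauge (parameter) ambiguity that plain gradient descent on unconstrained cores would leave behind.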