🤖 AI Summary
This work addresses the lack of a unified probabilistic foundation for machine learning on the manifold of symmetric positive-definite (SPD) matrices. We propose a systematic Riemannian Bayesian framework and establish, for the first time, the implicit Bayesian optimality of mainstream SPD classifiers, including the Log-Euclidean SVM and the Affine-Invariant classifier. Leveraging this insight, we construct a family of Riemannian Gaussian distributions compatible with several geometric metrics (Log-Euclidean, Affine-Invariant, and Bures–Wasserstein) and unify classification, anomaly detection, and manifold dimensionality reduction within a single Bayesian decision-theoretic framework. Our approach endows existing SPD-based methods with interpretable probabilistic semantics, enables cross-task knowledge transfer, and facilitates scalable algorithm design, strengthening both the theoretical consistency and the practical applicability of SPD data modeling and thereby providing a general probabilistic foundation for Riemannian machine learning.
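As a concrete instance of the distributions the summary refers to, a commonly used Log-Euclidean Gaussian on $\mathcal{P}_d$ (a standard form in the literature; the paper's exact parameterization is not reproduced here and may differ) places an ordinary isotropic Gaussian in matrix-log coordinates:

$$
p(X \mid \bar{X}, \sigma^2) \;\propto\; \exp\!\left(-\frac{\lVert \log X - \log \bar{X} \rVert_F^2}{2\sigma^2}\right), \qquad X, \bar{X} \in \mathcal{P}_d,
$$

where $\log$ denotes the matrix logarithm and $\lVert \log X - \log \bar{X} \rVert_F$ is exactly the Log-Euclidean distance. With class-conditional densities of this form and equal variances, the Bayes decision rule reduces to a nearest-class-mean rule in log-space, which is the sense in which standard Log-Euclidean classifiers can be read as implicitly Bayes-optimal.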
📝 Abstract
The goal of this paper is to show how different machine learning tools on the Riemannian manifold $\mathcal{P}_d$ of Symmetric Positive Definite (SPD) matrices can be unified under a probabilistic framework. To this end, we introduce several Gaussian distributions defined on $\mathcal{P}_d$. We show how popular classifiers on $\mathcal{P}_d$ can be reinterpreted as Bayes classifiers built from these Gaussian distributions, and we use the same distributions for outlier detection and dimension reduction. By showing that these distributions pervade the tools used on $\mathcal{P}_d$, we open the way for other machine learning tools to be extended to $\mathcal{P}_d$.
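To make the Bayes-classifier reinterpretation concrete, here is a minimal sketch (assuming the isotropic Log-Euclidean Gaussian above; the class name and the shared-variance choice are illustrative assumptions, not the paper's implementation) showing how the resulting classifier reduces to a nearest-class-mean rule in matrix-log coordinates:

```python
# Sketch of a Bayes classifier on SPD matrices under the Log-Euclidean metric.
# Each class is modeled by an isotropic Gaussian in log-space, so the decision
# rule is "prior minus scaled squared Log-Euclidean distance to the class mean".
import numpy as np
from scipy.linalg import logm

def log_vec(spd):
    """Map an SPD matrix to the vectorized matrix logarithm log(spd)."""
    return logm(spd).real.ravel()

class LogEuclideanBayes:
    def fit(self, spd_matrices, labels):
        X = np.stack([log_vec(S) for S in spd_matrices])
        y = np.asarray(labels)
        self.classes_ = np.unique(y)
        # Per-class mean and a shared isotropic variance in log-space.
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = X.var()
        self.log_priors_ = np.log([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, spd_matrices):
        X = np.stack([log_vec(S) for S in spd_matrices])
        # Squared Log-Euclidean distance of each sample to each class mean.
        d2 = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(axis=2)
        # Log-posterior up to an additive constant.
        scores = self.log_priors_ - d2 / (2.0 * self.var_)
        return self.classes_[scores.argmax(axis=1)]

# Toy usage: two classes of random SPD matrices with different scales.
rng = np.random.default_rng(0)
def rand_spd(d=3, scale=1.0):
    A = rng.normal(size=(d, d)) * scale
    return A @ A.T + d * np.eye(d)

data = [rand_spd(scale=0.5) for _ in range(20)] + [rand_spd(scale=2.0) for _ in range(20)]
labels = [0] * 20 + [1] * 20
clf = LogEuclideanBayes().fit(data, labels)
print(clf.predict(data[:3] + data[-3:]))  # expected: mostly [0 0 0 1 1 1]
```

The same log-space Gaussian machinery supports the paper's other tasks in the obvious way: thresholding the (log-)density gives an outlier detector, and principal component analysis of the log-space vectors gives a dimension-reduction method on $\mathcal{P}_d$.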