🤖 AI Summary
Problem: Existing Riemannian metrics for symmetric positive-definite (SPD) operators lack a unified geometric framework in infinite-dimensional Hilbert spaces, leading to theoretical limitations and poor geometric stability in high-dimensional and functional data analysis.
Method: We propose the Generalized Alpha-Procrustes Riemannian metric, a framework that integrates unitized Hilbert–Schmidt operators, an extended Mahalanobis norm, and a learnable regularization parameter.
Contribution/Results: This is a unified infinite-dimensional formulation encompassing the generalized Bures–Wasserstein, Log-Hilbert–Schmidt (the infinite-dimensional extension of Log-Euclidean geometry), and Wasserstein-type distances. It supplies the structural components that the original Alpha-Procrustes framework lacked in infinite dimensions, improving geometric stability and scale robustness when comparing high-dimensional and functional data. Preliminary experiments reproducing benchmarks from the literature show improved performance over existing metrics, particularly for comparisons between datasets of varying dimension and scale. The framework provides a theoretically grounded and practically effective Riemannian foundation for machine learning and functional data analysis.
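For context, the base family being generalized can be recalled as follows (a standard recap of the prior Alpha-Procrustes construction, stated here for SPD matrices; normalization conventions vary slightly across references, and this is not the paper's new generalized metric):

```latex
% Procrustes (Bures–Wasserstein) distance between SPD matrices A and B:
\[
  d_{\mathrm{pro}}(A,B)
    = \min_{U^{*}U=I} \bigl\| A^{1/2} - B^{1/2}U \bigr\|_{\mathrm{HS}}
    = \Bigl[ \operatorname{tr}(A) + \operatorname{tr}(B)
        - 2\,\operatorname{tr}\!\bigl( (A^{1/2} B A^{1/2})^{1/2} \bigr) \Bigr]^{1/2}
\]
% Alpha-Procrustes family (alpha > 0) and its Log-Euclidean limit:
\[
  d^{\alpha}_{\mathrm{pro}}(A,B)
    = \tfrac{1}{\alpha}\, d_{\mathrm{pro}}\bigl( A^{2\alpha}, B^{2\alpha} \bigr),
  \qquad
  \lim_{\alpha \to 0^{+}} d^{\alpha}_{\mathrm{pro}}(A,B)
    = \bigl\| \log A - \log B \bigr\|_{\mathrm{HS}}
\]
```

Setting α = 1/2 recovers the Bures–Wasserstein distance up to a constant factor, while α → 0 yields the Log-Euclidean (Log-Hilbert–Schmidt in infinite dimensions) distance; the generalized metrics summarized above extend this family through the unitized-operator and extended-Mahalanobis constructions.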
📝 Abstract
This work extends the recently introduced Alpha-Procrustes family of Riemannian metrics for symmetric positive definite (SPD) matrices by incorporating generalized versions of the Bures–Wasserstein (GBW), Log-Euclidean, and Wasserstein distances. While the Alpha-Procrustes framework has unified many classical metrics in both finite- and infinite-dimensional settings, it previously lacked the structural components necessary to realize these generalized forms. We introduce a formalism based on unitized Hilbert–Schmidt operators and an extended Mahalanobis norm that allows the construction of robust, infinite-dimensional generalizations of the GBW and Log-Hilbert–Schmidt distances. Our approach also incorporates a learnable regularization parameter that enhances geometric stability in high-dimensional comparisons. Preliminary experiments reproducing benchmarks from the literature demonstrate the improved performance of our generalized metrics, particularly in scenarios involving comparisons between datasets of varying dimension and scale. This work lays a theoretical and computational foundation for advancing robust geometric methods in machine learning, statistical inference, and functional data analysis.
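To make the objects concrete, here is a minimal numerical sketch of the finite-dimensional base metric only (it does not implement the paper's infinite-dimensional generalization; the function names and the fixed `eps` spectral floor, standing in for the learnable regularization parameter, are illustrative choices of ours):

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def bures_wasserstein(A, B):
    """Bures-Wasserstein (Procrustes) distance between SPD matrices."""
    sA = sqrtm(A)
    cross = sqrtm(sA @ B @ sA)
    d2 = np.trace(A) + np.trace(B) - 2.0 * np.real(np.trace(cross))
    return np.sqrt(max(d2, 0.0))  # clip tiny negative round-off

def alpha_procrustes(A, B, alpha=0.5, eps=1e-6):
    """(1/alpha) * d_BW(A^(2*alpha), B^(2*alpha)) for SPD matrices.

    `eps` floors the spectrum before taking fractional powers; it plays the
    role of a fixed (not learned) regularization parameter. As alpha -> 0
    the distance approaches the Log-Euclidean ||log A - log B||_F.
    """
    def spd_power(M, p):
        w, V = np.linalg.eigh(M)          # eigendecomposition of SPD input
        w = np.clip(w, eps, None)         # regularize the spectrum
        return (V * w**p) @ V.T           # V diag(w^p) V^T
    return bures_wasserstein(spd_power(A, 2 * alpha),
                             spd_power(B, 2 * alpha)) / alpha

# Sanity check: for small alpha the distance matches the Log-Euclidean one.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5)); A = X @ X.T + np.eye(5)
Y = rng.standard_normal((5, 5)); B = Y @ Y.T + np.eye(5)
print(alpha_procrustes(A, B, alpha=0.01))                 # ~ Log-Euclidean
print(np.linalg.norm(np.real(logm(A)) - np.real(logm(B)), "fro"))
```

The two printed values should agree closely, illustrating the α → 0 limit of the base family that the paper's generalized metrics build on.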