🤖 AI Summary
In hyperbolic latent variable models, geodesic interpolation often deviates from the underlying data manifold, leading to high predictive uncertainty. To address this, we propose the Hyperbolic Gaussian Process Latent Variable Model (Hyperbolic GPLVM), the first method to incorporate a pullback metric into a hyperbolic latent space. This ensures that geodesics respect hyperbolic geometry while aligning with the empirical data distribution, enabling probabilistically aware path planning. Our approach integrates Gaussian process priors, Riemannian manifold learning, and geodesic optimization. Extensive experiments on hierarchical datasets demonstrate that the model significantly reduces predictive variance, yields interpolation paths that closely conform to the true data manifold, and improves embedding quality and robustness on downstream tasks, including classification and reconstruction.
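The pullback construction at the heart of the summary can be made concrete with a toy sketch. For a decoder f mapping latent points to data space, the pullback of the Euclidean data-space metric is G(x) = J(x)ᵀJ(x), where J is the Jacobian of f at x. The closed-form decoder below is hypothetical, chosen only to make the computation visible; the paper's mapping is a GPLVM, not this toy function.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of f: R^d -> R^D at x."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

def pullback_metric(f, x):
    """Pull back the Euclidean metric of the data space through f:
    G(x) = J(x)^T J(x), a d x d symmetric positive semi-definite matrix."""
    J = jacobian(f, x)
    return J.T @ J

# hypothetical decoder from a 2-D latent space to a 3-D data space
f = lambda x: np.array([x[0], x[1], x[0] ** 2 + x[1] ** 2])
G = pullback_metric(f, np.array([1.0, 0.5]))
```

Measuring a latent velocity with G gives the length of the corresponding data-space velocity, which is why geodesics under a pullback metric hug the data manifold.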
📝 Abstract
Gaussian Process Latent Variable Models (GPLVMs) have proven effective at capturing complex, high-dimensional data through lower-dimensional representations. Recent advances show that using Riemannian manifolds as latent spaces provides more flexibility to learn higher-quality embeddings. This paper focuses on the hyperbolic manifold, a particularly suitable choice for modeling hierarchical relationships. Previous approaches relied on hyperbolic geodesics to interpolate in the latent space, but such paths often cross low-data regions, leading to highly uncertain predictions. Instead, we propose augmenting the hyperbolic metric with a pullback metric that accounts for the distortions introduced by the GPLVM's nonlinear mapping. Through various experiments, we demonstrate that geodesics under the pullback metric not only respect the geometry of the hyperbolic latent space but also align with the underlying data distribution, significantly reducing prediction uncertainty.
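To illustrate what "geodesics under a latent metric" means in practice, here is a minimal discrete-geodesic sketch: a curve between two latent points is stored as a polyline, its Riemannian energy is summed segment by segment, and the interior points are adjusted by finite-difference gradient descent. For brevity it uses the Poincaré-ball metric alone; the paper instead optimizes under the hyperbolic metric augmented with the pullback term, but the optimization pattern is the same. All function names and hyperparameters here are illustrative, not the authors' code.

```python
import numpy as np

def poincare_metric(x):
    """Metric tensor of the Poincare ball at x: (2 / (1 - |x|^2))^2 * I."""
    lam = 2.0 / (1.0 - x @ x)
    return lam ** 2 * np.eye(x.size)

def curve_energy(pts, metric):
    """Discrete Riemannian energy of a polyline: sum of v^T G(midpoint) v."""
    E = 0.0
    for a, b in zip(pts[:-1], pts[1:]):
        v = b - a
        E += v @ metric(0.5 * (a + b)) @ v
    return E

def optimize_path(start, end, metric, n=8, steps=200, lr=1e-3, eps=1e-5):
    """Finite-difference gradient descent on the curve's interior points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = (1 - t) * start + t * end  # straight-line initialization
    for _ in range(steps):
        grad = np.zeros_like(pts)
        for i in range(1, n - 1):          # endpoints stay fixed
            for j in range(pts.shape[1]):
                p, m = pts.copy(), pts.copy()
                p[i, j] += eps
                m[i, j] -= eps
                grad[i, j] = (curve_energy(p, metric)
                              - curve_energy(m, metric)) / (2 * eps)
        pts[1:-1] -= lr * grad[1:-1]
    return pts

start, end = np.array([-0.5, 0.3]), np.array([0.5, 0.3])
path = optimize_path(start, end, poincare_metric)
```

The optimized polyline bows toward the origin, as Poincaré geodesics do; swapping `poincare_metric` for the augmented metric would instead bend paths toward high-density latent regions, which is the behavior the paper targets.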