Connecting Neural Models Latent Geometries with Relative Geodesic Representations

📅 2025-06-02
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Neural models trained on identical tasks exhibit geometrically heterogeneous latent representations due to training stochasticity and architectural differences, making representations difficult to compare across models. To address this, we propose relative geodesic representations grounded in the differential-geometric pullback metric, explicitly modeling the intrinsic geometric transformations between the latent manifolds of distinct models. Unlike conventional linear alignment methods, our approach operates without supervision and generalizes across diverse architectures and pretraining paradigms. Experiments on autoencoders and discriminative vision foundation models demonstrate substantial improvements in cross-model retrieval accuracy and model stitching performance, and the method scales effectively to large-scale settings.

📝 Abstract
Neural models learn representations of high-dimensional data on low-dimensional manifolds. Multiple factors, including stochasticity in the training process, model architectures, and additional inductive biases, may induce different representations, even when learning the same task on the same data. However, it has recently been shown that when a latent structure is shared between distinct latent spaces, relative distances between representations can be preserved, up to distortions. Building on this idea, we demonstrate that by exploiting the differential-geometric structure of the latent spaces of neural models, it is possible to capture precisely the transformations between representational spaces trained on similar data distributions. Specifically, we assume that distinct neural models parametrize approximately the same underlying manifold, and introduce a representation based on the pullback metric that captures the intrinsic structure of the latent space while scaling efficiently to large models. We experimentally validate our method on model stitching and retrieval tasks, covering autoencoders and discriminative vision foundation models, across diverse architectures, datasets, and pretraining schemes.
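To make the pullback-metric idea concrete: if a decoder f maps latent codes to the data space, the Euclidean metric of the ambient space pulls back to the latent space as G(z) = J(z)ᵀJ(z), where J is the Jacobian of f at z. The sketch below is a minimal NumPy illustration of this definition only, not the paper's implementation; the toy decoder and the finite-difference Jacobian are illustrative assumptions.

```python
import numpy as np

def decoder(z):
    # Toy nonlinear decoder mapping a 2-D latent code to a 3-D ambient point.
    x, y = z
    return np.array([x, y, x**2 + y**2])

def pullback_metric(f, z, eps=1e-6):
    # G(z) = J(z)^T J(z), with J the Jacobian of f at z,
    # approximated here by central finite differences.
    d = len(z)
    J = np.stack([
        (f(z + eps * e) - f(z - eps * e)) / (2 * eps)
        for e in np.eye(d)
    ], axis=1)                      # shape (ambient_dim, latent_dim)
    return J.T @ J

G = pullback_metric(decoder, np.array([1.0, 0.0]))
# At z = (1, 0), J = [[1, 0], [0, 1], [2, 0]], so G = [[5, 0], [0, 1]]:
# the metric stretches latent distances along directions where the
# decoded surface is curved.
print(np.round(G, 4))
```

Lengths of latent curves measured under G approximate distances on the decoded manifold, which is what makes geodesic (rather than straight-line) distances meaningful in the latent space.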
Problem

Research questions and friction points this paper is trying to address.

Understanding differences in neural model latent representations
Mapping transformations between latent spaces trained on similar data distributions
Evaluating cross-model alignment on model stitching and retrieval tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relative geodesic representations connect latent geometries
Pullback metric captures intrinsic latent space structure
Validated on model stitching and retrieval tasks
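The "relative representation" idea behind these contributions can be sketched as follows: describe each latent code not by its raw coordinates but by its relation to a shared set of anchor samples, which makes representations from differently trained models comparable. The snippet below is a simplified illustration using cosine similarities to anchors as a stand-in for the geodesic distances the paper actually uses; the random orthogonal map simulating a second model is an assumption for demonstration.

```python
import numpy as np

def relative_representation(Z, anchors):
    # Represent each latent code by its cosine similarity to a shared
    # set of anchor samples (a linear stand-in for geodesic distances).
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    An = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return Zn @ An.T

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 16))                    # latent codes from "model A"
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # orthogonal map: simulated "model B"
anchors = Z[:3]

rel_A = relative_representation(Z, anchors)
rel_B = relative_representation(Z @ Q, anchors @ Q)

# Cosine similarities to anchors are invariant to orthogonal transforms,
# so the two models' relative representations coincide.
print(np.allclose(rel_A, rel_B))
```

Replacing the cosine similarity with geodesic distances computed under the pullback metric extends this invariance beyond linear transformations, which is the step the paper takes.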