Harnessing the Universal Geometry of Embeddings

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the incomparability of text embeddings across models and the resulting security exposure of vector databases. It proposes the first unsupervised embedding-space translation method, requiring neither paired data, pretrained encoders, nor a predefined set of matches. Grounded in the Platonic Representation Hypothesis, the approach routes embeddings through a universal latent semantic space and aligns representations across architectures and model scales via geometric-consistency constraints and latent-space normalization. Experiments demonstrate high-cosine-similarity translations between heterogeneous models and enable downstream tasks, including document classification and attribute inference, from embedding vectors alone. Crucially, this work provides empirical evidence that text embedding spaces share a universal, transferable geometric semantic structure, with direct consequences for the security and interoperability of vector databases.
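The core property the summary describes, a translation that preserves the geometry of the source embedding space, can be illustrated with a toy check. The snippet below is a minimal sketch, not the paper's method: it stands in for a learned translator with a random orthogonal rotation (an idealized geometry-preserving map) and verifies that pairwise cosine similarities survive the translation. All names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 16
src = rng.normal(size=(n, d))  # toy "source model" embeddings

# Hypothetical stand-in for a learned translator: a random rotation,
# which preserves pairwise geometry exactly (the ideal case).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
translated = src @ Q

def pairwise_cos(X):
    # Row-normalize, then take all pairwise cosine similarities.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

# Maximum change in any pairwise cosine similarity after translation.
drift = np.abs(pairwise_cos(src) - pairwise_cos(translated)).max()
print(drift)  # a rotation leaves pairwise geometry intact, so drift is ~0
```

The paper's actual translator is learned without supervision between two unrelated models; the point of the sketch is only the evaluation criterion: a good translation keeps the relative geometry (and hence cosine similarities) of the original space.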

📝 Abstract
We introduce the first method for translating text embeddings from one vector space to another without any paired data, encoders, or predefined sets of matches. Our unsupervised approach translates any embedding to and from a universal latent representation (i.e., a universal semantic structure conjectured by the Platonic Representation Hypothesis). Our translations achieve high cosine similarity across model pairs with different architectures, parameter counts, and training datasets. The ability to translate unknown embeddings into a different space while preserving their geometry has serious implications for the security of vector databases. An adversary with access only to embedding vectors can extract sensitive information about the underlying documents, sufficient for classification and attribute inference.
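The abstract's security claim is that embedding vectors alone leak enough signal for attribute inference. A minimal hedged sketch of why: once vectors sit in a comparable space, an adversary can classify them by nearest attribute prototype under cosine similarity. The prototypes, dimensions, and labels below are hypothetical, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32

# Hypothetical attribute prototypes an adversary might construct in the
# translated space (e.g., mean embeddings of known "medical" vs
# "financial" documents).
protos = {"medical": rng.normal(size=d), "financial": rng.normal(size=d)}

def infer_attribute(vec, protos):
    # Pick the prototype with the highest cosine similarity to the vector.
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(protos, key=lambda k: cos(vec, protos[k]))

# A leaked vector close to the "medical" prototype is labeled accordingly.
leaked = protos["medical"] + 0.1 * rng.normal(size=d)
print(infer_attribute(leaked, protos))
```

Nothing here requires the original documents or the encoder that produced the vectors, which is the abstract's point: geometry-preserving translation makes unknown embeddings comparable, and comparability is enough to mount such attacks.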
Problem

Research questions and friction points this paper is trying to address.

Translate text embeddings without paired data or encoders
Create universal latent representation for embeddings
Address security risks in vector databases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised text embedding translation without paired data
Universal latent representation for semantic structure
Preserves geometry across different model architectures