Information Leakage of Sentence Embeddings via Generative Embedding Inversion Attacks

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study reveals a critical privacy vulnerability in sentence embeddings under Generative Embedding Inversion Attacks (GEIAs): sensitive information, including original input sentences and semantically sensitive content from pre-training corpora, can be reconstructed from publicly released embeddings. Unlike existing inversion methods, which rely on model fine-tuning or architectural modifications, we propose a framework-agnostic inversion analysis: likelihood modeling in the attacker's embedding space based on the log-likelihood difference between masked and original inputs, augmented by statistical bias analysis over pre-training corpora. We systematically evaluate our method across diverse sentence embedding models, including BERT, SimCSE, and Sentence-BERT, and demonstrate that it robustly and efficiently extracts sensitive knowledge internalized during pre-training. Our findings substantially broaden the understanding of embedding privacy risks and establish a new paradigm for rigorous security assessment of sentence embeddings.

📝 Abstract
Text data are often encoded as dense vectors, known as embeddings, which capture semantic, syntactic, contextual, and domain-specific information. These embeddings, widely adopted in various applications, inherently contain rich information that may be susceptible to leakage under certain attacks. The GEIA framework highlights vulnerabilities in sentence embeddings, demonstrating that they can reveal the original sentences they represent. In this study, we reproduce GEIA's findings across various neural sentence embedding models. Additionally, we contribute new analysis to examine whether these models leak sensitive information from their training datasets. We propose a simple yet effective method that requires no modification to the attacker architecture proposed in GEIA. The key idea is to examine the difference in log-likelihood between masked and original variants of the data that sentence embedding models were pre-trained on, computed in the embedding space of the attacker. Our findings indicate that, following our approach, an adversary can recover meaningful sensitive information related to the pre-training knowledge of popular sentence embedding models, seriously undermining their security. Our code is available at: https://github.com/taslanidis/GEIA
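The scoring step described in the abstract, comparing the log-likelihood of a masked variant against the original sentence under the attacker's decoder, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `ATTACKER_LOGPROBS` stands in for the per-token conditional log-probabilities a GPT-2-style attacker decoder would assign given the victim embedding, and the names `log_likelihood` and `leakage_score` are hypothetical.

```python
import math

# Toy stand-in for an attacker decoder conditioned on a victim embedding:
# a flat table of per-token log-probabilities. In the real attack these
# would come from a trained generative decoder, not a lookup table.
ATTACKER_LOGPROBS = {
    "alice": math.log(0.30),   # high mass: content likely memorized
    "lives": math.log(0.20),
    "in":    math.log(0.25),
    "paris": math.log(0.15),
    "[MASK]": math.log(0.001),  # mask token is near-impossible to generate
}

def log_likelihood(tokens, logprobs):
    """Sum of per-token log-probabilities under the (toy) attacker decoder."""
    return sum(logprobs.get(t, math.log(1e-6)) for t in tokens)

def leakage_score(original, masked, logprobs):
    """Log-likelihood gap between the original sentence and its masked variant.

    A large positive gap means the attacker finds the real content far more
    likely than the masked placeholder, which is the leakage signal the
    abstract describes.
    """
    return log_likelihood(original, logprobs) - log_likelihood(masked, logprobs)

original = ["alice", "lives", "in", "paris"]
masked   = ["[MASK]", "lives", "in", "paris"]
score = leakage_score(original, masked, ATTACKER_LOGPROBS)
# A positive score flags the masked token's content as likely memorized.
```

In the toy numbers above the gap reduces to `log(0.30) - log(0.001)`, so only the masked position contributes; the shared tokens cancel, which is the point of comparing paired variants.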
Problem

Research questions and friction points this paper is trying to address.

Examines vulnerabilities in sentence embeddings to information leakage
Proposes method to recover sensitive pre-training data from embeddings
Assesses security risks of generative embedding inversion attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Embedding Inversion Attacks framework
Log-likelihood comparison for data variants
Recovery of sensitive pre-training information from embeddings