LLM2Vec-Gen: Generative Embeddings from Large Language Models

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of traditional embedding models, which struggle with semantically diverse inputs and rely heavily on paired labeled data. The authors propose a self-supervised embedding approach that introduces trainable special tokens into a frozen large language model (LLM), where these tokens represent the model’s latent generative response to an input, thereby yielding fixed-length embeddings. Departing from conventional encoding paradigms, the method requires no labeled data—training solely on unlabeled queries—and inherits the LLM’s safety alignment and reasoning capabilities. Experimental results on the MTEB benchmark demonstrate a 9.3% improvement in self-supervised performance, a 43.2% reduction in harmful content retrieval, and a 29.3% gain in reasoning ability. Moreover, the resulting embeddings are interpretable and decodable into natural language.

📝 Abstract
LLM-based text embedders typically encode the semantic content of their input. However, embedding tasks require mapping diverse inputs to similar outputs. Typically, this input-output gap is addressed by training embedding models with paired data using contrastive learning. In this work, we propose a novel self-supervised approach, LLM2Vec-Gen, which adopts a different paradigm: rather than encoding the input, we learn to represent the model's potential response. Specifically, we add trainable special tokens to the LLM's vocabulary, append them to the input, and optimize them to represent the LLM's response in a fixed-length sequence. Training is guided by the LLM's own completion for the query, along with an unsupervised embedding teacher that provides distillation targets. This formulation helps to bridge the input-output gap and transfers LLM capabilities such as safety alignment and reasoning to embedding tasks. Crucially, the LLM backbone remains frozen and training requires only unlabeled queries. LLM2Vec-Gen achieves state-of-the-art self-supervised performance on the Massive Text Embedding Benchmark (MTEB), improving by 9.3% over the best unsupervised embedding teacher. We also observe up to 43.2% reduction in harmful content retrieval and 29.3% improvement in reasoning capabilities for embedding tasks. Finally, the learned embeddings are interpretable and can be decoded into text to reveal their semantic content.
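As a rough illustration of the training setup described above (not the paper's implementation), the core idea can be sketched with a toy stand-in: a frozen linear map plays the role of the LLM backbone, a few trainable "special token" vectors are the only learned parameters, and they are optimized so the backbone's output matches a distillation target from a hypothetical embedding teacher. All names, dimensions, and the linear backbone are illustrative assumptions.

```python
import random

random.seed(0)
D_MODEL, D_EMBED, N_SPECIAL = 6, 3, 2

# Frozen "backbone": a fixed random linear map standing in for the LLM's forward pass.
W = [[random.gauss(0, 1) / D_MODEL ** 0.5 for _ in range(D_MODEL)]
     for _ in range(D_EMBED)]

# Trainable special-token embeddings: the only parameters that receive updates.
special = [[0.0] * D_MODEL for _ in range(N_SPECIAL)]

# Distillation target from a hypothetical unsupervised embedding teacher.
teacher = [random.gauss(0, 1) for _ in range(D_EMBED)]

def forward(special):
    # Pool the special tokens into one fixed-length vector, then apply the backbone.
    pooled = [sum(tok[j] for tok in special) / N_SPECIAL for j in range(D_MODEL)]
    return [sum(W[i][j] * pooled[j] for j in range(D_MODEL)) for i in range(D_EMBED)]

def mse(out):
    return sum((o - t) ** 2 for o, t in zip(out, teacher)) / D_EMBED

lr = 1.0
init_loss = mse(forward(special))
for _ in range(2000):
    out = forward(special)
    grad_out = [2 * (o - t) / D_EMBED for o, t in zip(out, teacher)]
    # Backpropagation stops at the special tokens: W (the backbone) is never updated.
    grad_pooled = [sum(W[i][j] * grad_out[i] for i in range(D_EMBED))
                   for j in range(D_MODEL)]
    for tok in special:
        for j in range(D_MODEL):
            tok[j] -= lr * grad_pooled[j] / N_SPECIAL
final_loss = mse(forward(special))
```

Because only the appended token vectors are trained while the backbone stays frozen, the loss drops to near zero without touching any backbone weight, mirroring the paper's claim that training requires no labeled pairs and leaves the LLM intact.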
Problem

Research questions and friction points this paper is trying to address.

text embedding
large language models
self-supervised learning
semantic representation
unsupervised embedding
Innovation

Methods, ideas, or system contributions that make the work stand out.

generative embeddings
self-supervised learning
large language models
embedding distillation
interpretable representations