SemPA: Improving Sentence Embeddings of Large Language Models through Semantic Preference Alignment

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of traditional sentence embedding methods when applied to large language models (LLMs): fixed prompt templates yield suboptimal performance, while architectural modifications compromise generative capabilities. To overcome this, the authors propose a paradigm that enhances sentence embeddings without altering the model architecture, leveraging sentence-level Direct Preference Optimization (DPO) to align with semantic preferences while preserving the LLM's generative capacity. They establish a theoretical connection between DPO and contrastive learning under the Plackett–Luce model and train the model on semantic equivalence judgments. Experiments show significant improvements in semantic representation across multiple semantic textual similarity benchmarks and LLM evaluation tasks, all without degrading the model's original generative performance.
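The prompt-template baselines the summary criticizes read a sentence embedding straight out of a frozen LLM's final-layer hidden states. A minimal sketch of the two common read-outs for a decoder-only model (last-token vs. mean pooling) and the cosine score used by STS benchmarks — the toy activations and function names here are illustrative, not from the paper:

```python
import math

def last_token_embedding(hidden_states):
    # hidden_states: list of per-token final-layer vectors for one sentence.
    # Decoder-only LLMs commonly take the last token's state as the embedding.
    return hidden_states[-1]

def mean_pool_embedding(hidden_states):
    # Alternative read-out: average the token vectors.
    dim = len(hidden_states[0])
    n = len(hidden_states)
    return [sum(tok[d] for tok in hidden_states) / n for d in range(dim)]

def cosine(u, v):
    # Similarity score used by STS benchmarks to compare sentence embeddings.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-token, 4-dim "hidden states" standing in for real LLM activations.
h = [[0.1, 0.2, 0.3, 0.4], [0.0, 0.5, 0.1, 0.2], [0.3, 0.1, 0.4, 0.0]]
e_last = last_token_embedding(h)
e_mean = mean_pool_embedding(h)
assert abs(cosine(e_last, e_last) - 1.0) < 1e-12
```

SemPA's point is that the extraction pipeline stays exactly like this (no architectural change); only the model's weights are updated via preference alignment so the pooled vectors become better embeddings.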

📝 Abstract
Traditional sentence embedding methods apply token-level contrastive learning to non-generative pre-trained models. Recently, embedding methods based on generative large language models (LLMs) have emerged. These methods either rely on fixed prompt templates or modify the model architecture: the former leaves the model unoptimized and yields limited performance, while the latter alters the model's internal computations and thereby compromises its generative capabilities. We propose SemPA, a novel approach that improves sentence representations while preserving the generative ability of LLMs via semantic preference alignment. We leverage sentence-level Direct Preference Optimization (DPO) to efficiently optimize LLMs on a paraphrase generation task, in which the model learns to discriminate semantically equivalent sentences while retaining its inherent generative capacity. Theoretically, we establish a formal connection between DPO and contrastive learning under the Plackett–Luce model framework. Empirically, results on both semantic textual similarity tasks and standard LLM benchmarks show that SemPA achieves better semantic representations without sacrificing the inherent generation capability of LLMs.
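The abstract's claimed connection between DPO and contrastive learning can be made concrete for the two-candidate case: with candidate scores defined as s = β·(log p_policy − log p_ref), the pairwise DPO loss equals an InfoNCE loss with one negative (i.e., Plackett–Luce with K = 2). A self-contained numerical sketch under these assumptions (the β value and log-probabilities below are made-up inputs, not the paper's numbers):

```python
import math

def log_sigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def dpo_loss(logp_w, logp_l, ref_w, ref_l, beta=0.1):
    # Sentence-level DPO: prefer the semantically equivalent paraphrase
    # (winner) over a non-equivalent sentence (loser), measured relative
    # to a frozen reference model.
    return -log_sigmoid(beta * ((logp_w - ref_w) - (logp_l - ref_l)))

def contrastive_loss(s_pos, s_neg, tau=1.0):
    # InfoNCE with a single negative: the K = 2 case of the
    # Plackett-Luce choice model (stable log-sum-exp).
    z = [s_pos / tau, s_neg / tau]
    m = max(z)
    log_denom = m + math.log(sum(math.exp(v - m) for v in z))
    return -(z[0] - log_denom)

# With scores s = beta * (logp - logp_ref) and tau = 1, the losses coincide.
beta = 0.1
logp_w, logp_l, ref_w, ref_l = -12.3, -15.0, -13.1, -14.2
s_pos = beta * (logp_w - ref_w)
s_neg = beta * (logp_l - ref_l)
assert abs(dpo_loss(logp_w, logp_l, ref_w, ref_l, beta)
           - contrastive_loss(s_pos, s_neg)) < 1e-9
```

This identity is exactly why optimizing the DPO objective on paraphrase preferences can shape the model's representation space the way a contrastive objective would, while never leaving the generative (log-probability) parameterization.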
Problem

Research questions and friction points this paper is trying to address.

sentence embeddings
large language models
generative capability
semantic representation
preference alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic Preference Alignment
Sentence Embeddings
Direct Preference Optimization
Large Language Models
Contrastive Learning