Adapting General-Purpose Embedding Models to Private Datasets Using Keyword-based Retrieval

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generic text embedding models suffer significant performance degradation in retrieval tasks over enterprise private data—particularly when such data contains abundant domain-specific terminology. To address this, we propose BMEmbed, the first method to leverage unsupervised BM25 ranking outputs as supervision signals for contrastive learning, enabling lightweight, efficient, and annotation-free domain adaptation. Theoretically and empirically, we demonstrate that BM25-derived signals jointly optimize both alignment and uniformity of the learned embedding space. BMEmbed is model-agnostic and compatible with diverse foundation embedding models (e.g., BGE, E5). Extensive experiments across multiple private domain datasets show an average 12.7% improvement in Mean Reciprocal Rank (MRR), confirming strong generalization capability. Our implementation is publicly available.

📝 Abstract
Text embedding models play a cornerstone role in AI applications such as retrieval-augmented generation (RAG). While general-purpose text embedding models demonstrate strong performance on generic retrieval benchmarks, their effectiveness diminishes when applied to private datasets (e.g., company-specific proprietary data), which often contain specialized terminology and lingo. In this work, we introduce BMEmbed, a novel method for adapting general-purpose text embedding models to private datasets. By leveraging the well-established keyword-based retrieval technique (BM25), we construct supervisory signals from the ranking of keyword-based retrieval results to facilitate model adaptation. We evaluate BMEmbed across a range of domains, datasets, and models, showing consistent improvements in retrieval performance. Moreover, we provide empirical insights into how BM25-based signals contribute to improving embeddings by fostering alignment and uniformity, highlighting the value of this approach in adapting models to domain-specific data. We release the source code at https://github.com/BaileyWei/BMEmbed for the research community.
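The core idea described above can be sketched in a few lines: rank a private corpus with BM25 for a given query, then treat top-ranked documents as pseudo-positives and low-ranked ones as negatives, yielding annotation-free training signal. The snippet below is a minimal illustration, not the paper's implementation; the tiny corpus, the BM25 parameters (k1, b), and the top-1/bottom-1 pairing rule are all illustrative assumptions.

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score each tokenized document in `corpus` against `query`
    with a minimal Okapi BM25 (pure stdlib, illustrative only)."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    df = Counter(t for d in corpus for t in set(d))  # document frequency
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(s)
    return scores

# Toy private corpus and query (whitespace-tokenized for simplicity).
corpus = [
    "bm25 ranks documents by keyword overlap".split(),
    "embedding models map text to vectors".split(),
    "contrastive learning pulls positives together".split(),
]
query = "keyword bm25 ranking".split()

# Rank by BM25, then derive a (positive, negative) pair for contrastive
# training: top-ranked document as pseudo-positive, bottom as negative.
scores = bm25_scores(query, corpus)
ranked = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)
positive, negative = corpus[ranked[0]], corpus[ranked[-1]]
```

In a full pipeline, pairs like `(query, positive, negative)` would feed a contrastive objective (e.g., InfoNCE) to fine-tune the embedding model; the paper's actual ranking-to-supervision mapping is described in the released code.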
Problem

Research questions and friction points this paper is trying to address.

Adapting general-purpose embedding models to private datasets
Improving retrieval performance on specialized terminology
Enhancing embeddings via BM25-based alignment and uniformity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts general-purpose embedding models to private datasets
Uses keyword-based retrieval (BM25) for supervision
Improves alignment and uniformity in embeddings
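The alignment and uniformity that the paper measures are standard embedding-quality metrics (from Wang and Isola's analysis of contrastive learning): alignment is the mean squared distance between positive pairs (lower is better), and uniformity is the log-mean Gaussian potential over all pairs (lower means embeddings spread more evenly on the unit sphere). A minimal sketch, using toy 2-D unit vectors rather than real model embeddings:

```python
import math

def alignment(pairs):
    """Mean squared Euclidean distance over positive pairs (lower = better aligned)."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(x, y)) for x, y in pairs
    ) / len(pairs)

def uniformity(embs, t=2.0):
    """Log of the mean Gaussian kernel over all distinct pairs
    (lower = embeddings more uniformly spread)."""
    vals = []
    for i in range(len(embs)):
        for j in range(i + 1, len(embs)):
            d2 = sum((a - b) ** 2 for a, b in zip(embs[i], embs[j]))
            vals.append(math.exp(-t * d2))
    return math.log(sum(vals) / len(vals))

# Illustrative positive pairs (query embedding, document embedding).
pairs = [((1.0, 0.0), (0.8, 0.6)), ((0.0, 1.0), (0.6, 0.8))]
# Illustrative embedding set spread around the unit circle.
embs = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
```

BMEmbed's claim is that BM25-derived supervision improves both numbers at once: positives move closer (alignment drops) without the embedding space collapsing (uniformity stays low).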
Yubai Wei
The Hong Kong University of Science and Technology
Jiale Han
The Hong Kong University of Science and Technology
Yi Yang
The Hong Kong University of Science and Technology