Query Drift Compensation: Enabling Compatibility in Continual Learning of Retrieval Embedding Models

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In dynamic scenarios, continual updates to text embedding models render historical indices obsolete, while full re-indexing incurs prohibitive computational cost. Method: the paper introduces a query drift compensation mechanism that projects query embeddings produced by the updated model into the original embedding space at retrieval time, enabling cross-version embedding compatibility and mitigating catastrophic forgetting of historical knowledge. The approach integrates embedding distillation, query-space alignment, and dense retrieval modeling. Contribution/Results: the authors construct a large-scale benchmark for continual learning in retrieval. Experiments demonstrate that the method restores over 95% of the original task's retrieval accuracy without re-indexing, significantly outperforming baselines in cross-task generalization.

📝 Abstract
Text embedding models enable semantic search, powering several NLP applications such as Retrieval Augmented Generation through efficient information retrieval (IR). However, text embedding models are commonly studied in scenarios where the training data is static, limiting their application to dynamic scenarios where new training data emerges over time. IR methods generally encode a huge corpus of documents into low-dimensional embeddings and store them in a database index. During retrieval, a semantic search over the corpus is performed and the document whose embedding is most similar to the query embedding is returned. When updating an embedding model with new training data, using the already indexed corpus is suboptimal due to the non-compatibility issue, since the model that was used to obtain the corpus embeddings has changed. While re-indexing old corpus documents with the updated model restores compatibility, it requires substantial additional computation and time. Thus, it is critical to study how the already indexed corpus can still be used effectively without the need for re-indexing. In this work, we establish a continual learning benchmark with large-scale datasets, continually train dense retrieval embedding models on query-document pairs from new datasets in each task, and observe forgetting on old tasks due to significant drift of embeddings. We employ embedding distillation on both query and document embeddings to maintain stability, and propose a novel query drift compensation method that, during retrieval, projects new model query embeddings into the old embedding space. This enables compatibility with previously indexed corpus embeddings extracted using the old model and thus reduces forgetting. We show that the proposed method significantly improves performance without any re-indexing. Code is available at https://github.com/dipamgoswami/QDC.
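The retrieval setup the abstract describes (encode the corpus once into an index, then answer queries by embedding similarity) can be sketched as below. This is a minimal illustration with random stand-in vectors and a hypothetical `search` helper, not the paper's code; the embedding dimension and corpus size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for embedding-model outputs (hypothetical 8-dim vectors).
# In practice these would come from encoding documents with the old model.
corpus_embeddings = rng.normal(size=(100, 8))  # indexed once

# A query whose embedding happens to lie very close to document 42.
query_embedding = corpus_embeddings[42] + 0.01 * rng.normal(size=8)

def search(index: np.ndarray, query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k documents most cosine-similar to the query."""
    index_n = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = index_n @ query_n          # cosine similarity to every document
    return np.argsort(-scores)[:k]      # top-k by descending similarity

top_k = search(corpus_embeddings, query_embedding)
```

The non-compatibility problem arises exactly here: once the query encoder is updated, `query_embedding` lives in a different space from `corpus_embeddings`, and the similarity scores become unreliable unless the corpus is re-indexed or the query is projected back.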
Problem

Research questions and friction points this paper is trying to address.

Addressing non-compatibility in dynamic retrieval embedding models
Reducing computational cost of re-indexing old corpus documents
Mitigating forgetting in continual learning of embedding models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding distillation maintains stability of embeddings
Query drift compensation projects new-model query embeddings into the old embedding space
Enables compatibility without re-indexing old corpus
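One simple way to realize the projection idea above is a linear map fitted by least squares between the two models' embeddings of the same queries. This is a minimal sketch under that assumption, with simulated drift; the paper's actual compensation method may differ in form and fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # illustrative embedding dimension

# Hypothetical paired data: old-model query embeddings and the drifted
# new-model embeddings of the same queries. Here the drift is simulated
# as an unknown rotation plus small noise.
old_queries = rng.normal(size=(200, d))
drift, _ = np.linalg.qr(rng.normal(size=(d, d)))       # unknown transform
new_queries = old_queries @ drift + 0.01 * rng.normal(size=(200, d))

# Fit a linear projection W mapping the new space back to the old space
# (ordinary least squares), then apply it to queries at retrieval time so
# they can be scored against the old, already-indexed corpus embeddings.
W, *_ = np.linalg.lstsq(new_queries, old_queries, rcond=None)

compensated = new_queries @ W
error = np.linalg.norm(compensated - old_queries) / np.linalg.norm(old_queries)
```

Because only queries are projected, the corpus index built with the old model is reused as-is, which is what avoids the re-indexing cost.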