Context is Gold to find the Gold Passage: Evaluating and Training Contextual Document Embeddings

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing document retrieval embedding methods typically encode passages in isolation, neglecting global document context and thereby compromising semantic consistency among passage representations. This work systematically examines the impact of document-level context on passage retrieval and introduces ConTEB, the first context-aware evaluation benchmark for this task. To address this limitation, the authors propose InSeNT, a contrastive post-training method that combines in-sequence negatives with late chunking so that chunk representations capture document-wide context without degrading base model performance. Experiments show that InSeNT significantly improves retrieval quality on ConTEB and is more robust to suboptimal chunking strategies and larger retrieval corpora. All code, data, and models are publicly released.

📝 Abstract
A limitation of modern document retrieval embedding methods is that they typically encode passages (chunks) from the same documents independently, often overlooking crucial contextual information from the rest of the document that could greatly improve individual chunk representations. In this work, we introduce ConTEB (Context-aware Text Embedding Benchmark), a benchmark designed to evaluate retrieval models on their ability to leverage document-wide context. Our results show that state-of-the-art embedding models struggle in retrieval scenarios where context is required. To address this limitation, we propose InSeNT (In-sequence Negative Training), a novel contrastive post-training approach which, combined with late chunking pooling, enhances contextual representation learning while preserving computational efficiency. Our method significantly improves retrieval quality on ConTEB without sacrificing base model performance. We further find that chunks embedded with our method are more robust to suboptimal chunking strategies and larger retrieval corpus sizes. We open-source all artifacts at https://github.com/illuin-tech/contextual-embeddings.
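The core idea behind late chunking can be illustrated without a model: the full document is run through the encoder once, so every token embedding is already contextualized by the whole document, and only afterwards are tokens pooled within chunk boundaries. This numpy sketch assumes the encoder's token-level output is already available; the array shapes and boundary format are illustrative, not the paper's API.

```python
import numpy as np

def late_chunk_pool(token_embeddings, chunk_boundaries):
    """Mean-pool contextualized token embeddings into one vector per chunk.

    Because pooling happens *after* the full-document encoder pass,
    each chunk vector inherits document-wide context for free.
    """
    chunks = [token_embeddings[start:end].mean(axis=0)
              for start, end in chunk_boundaries]
    return np.stack(chunks)

# Toy stand-in for encoder output: 10 tokens, 4-dim embeddings.
rng = np.random.default_rng(0)
doc_tokens = rng.normal(size=(10, 4))

# Three chunks covering tokens [0, 4), [4, 7), [7, 10).
chunk_vecs = late_chunk_pool(doc_tokens, [(0, 4), (4, 7), (7, 10)])
print(chunk_vecs.shape)  # (3, 4)
```

Contrast this with the standard pipeline, which splits the text first and encodes each chunk separately, so no chunk ever sees the rest of the document.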
Problem

Research questions and friction points this paper is trying to address.

Evaluating document retrieval models' context utilization
Improving chunk representations with document-wide context
Enhancing retrieval robustness to chunking strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces ConTEB for context-aware embedding evaluation
Proposes InSeNT, an in-sequence negative contrastive post-training method
Uses late chunking pooling for efficient representation learning
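The page does not spell out the InSeNT loss, but its name suggests an InfoNCE-style objective in which the other chunks of the same document serve as "in-sequence" negatives for the query's gold chunk. The sketch below is a generic, hedged rendering of that idea in numpy; the function name, temperature value, and cosine scoring are assumptions, not the paper's exact formulation.

```python
import numpy as np

def insequence_contrastive_loss(query, chunk_embs, gold_idx, temperature=0.05):
    """InfoNCE-style loss: the gold chunk is the positive and the
    remaining chunks of the same document act as in-sequence negatives."""
    q = query / np.linalg.norm(query)
    c = chunk_embs / np.linalg.norm(chunk_embs, axis=1, keepdims=True)
    logits = (c @ q) / temperature      # cosine similarities, scaled
    logits -= logits.max()              # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum())
    return -log_softmax[gold_idx]

# A query closely aligned with chunk 0 should incur a low loss for
# gold_idx=0 and a high loss if the wrong chunk is labeled gold.
rng = np.random.default_rng(1)
chunks = rng.normal(size=(4, 8))
query = chunks[0] + 0.01 * rng.normal(size=8)
loss_good = insequence_contrastive_loss(query, chunks, gold_idx=0)
loss_bad = insequence_contrastive_loss(query, chunks, gold_idx=1)
print(loss_good < loss_bad)  # True
```

Minimizing this loss pulls the query toward its gold chunk while pushing it away from sibling chunks of the same document, which is what encourages chunk embeddings to stay distinguishable yet contextually consistent.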