🤖 AI Summary
Zero-shot dense retrieval suffers from semantic ambiguity among top-ranked documents because no relevance labels exist for training queries on the target corpus. To address this, we propose a training-free representation sharpening framework that enhances document embeddings via context-aware refinement during indexing, improving semantic discriminability without modifying the underlying retriever. The method is compatible with any pre-trained dense retriever and includes an approximation strategy that balances effectiveness against computational overhead. Evaluated across more than twenty multilingual zero-shot benchmarks, including BRIGHT, the approach sets a new state of the art. Its approximate variant retains over 90% of the full method's gains while incurring no additional inference-time cost. The core contribution is the first unsupervised, fine-tuning-free, and computationally efficient document representation sharpening technique, which significantly alleviates semantic confusion in zero-shot dense retrieval.
📝 Abstract
Zero-shot dense retrieval is a challenging setting where a document corpus is provided without relevant queries, necessitating a reliance on pretrained dense retrievers (DRs). However, since these DRs are not trained on the target corpus, they struggle to represent semantic differences between similar documents. To address this failing, we introduce a training-free representation sharpening framework that augments a document's representation with information that helps differentiate it from similar documents in the corpus. On over twenty datasets spanning multiple languages, the representation sharpening framework proves consistently superior to traditional retrieval, setting a new state-of-the-art on the BRIGHT benchmark. We show that representation sharpening is compatible with prior approaches to zero-shot dense retrieval and consistently improves their performance. Finally, we address the performance-cost tradeoff presented by our framework and devise an indexing-time approximation that preserves the majority of our performance gains over traditional retrieval, yet suffers no additional inference-time cost.
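The paper does not spell out the sharpening operation here, but the idea of "augmenting a document's representation with information that helps differentiate it from similar documents" can be illustrated with a minimal sketch: for each document embedding, find its nearest corpus neighbors at indexing time and subtract a fraction of their centroid, pushing similar documents apart. The function name `sharpen_embeddings` and the hyperparameters `k` and `alpha` are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def sharpen_embeddings(doc_embs: np.ndarray, k: int = 8, alpha: float = 0.5) -> np.ndarray:
    """Hypothetical sketch of representation sharpening: push each
    document embedding away from the centroid of its k nearest corpus
    neighbors, increasing local separability among similar documents.
    `k` and `alpha` are illustrative hyperparameters, not from the paper."""
    # Normalize rows so dot products are cosine similarities.
    embs = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sims = embs @ embs.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-similarity

    sharpened = np.empty_like(embs)
    for i, row in enumerate(sims):
        nbrs = np.argpartition(row, -k)[-k:]   # k most similar documents
        centroid = embs[nbrs].mean(axis=0)
        v = embs[i] - alpha * centroid         # remove the shared-context component
        sharpened[i] = v / np.linalg.norm(v)   # re-normalize for cosine retrieval
    return sharpened
```

Because the sharpened vectors are computed once at indexing time, query-side retrieval (a single dot product against the index) is unchanged, which is consistent with the abstract's point that an indexing-time approximation adds no inference-time cost.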