🤖 AI Summary
To address the sharp increase in inverted-index retrieval latency that high-document-frequency (DF) terms cause for the SPLADE-Doc model in production settings, this paper proposes DF-FLOPS regularization: a document-frequency-aware weight penalty layered on top of the FLOPS constraint that explicitly suppresses the activation of high-DF terms while still allowing them where they are genuinely salient. The method keeps sparse-vector learning end-to-end trainable and substantially improves retrieval efficiency without sacrificing semantic modeling capability. Experiments show a roughly 10x reduction in retrieval latency, reaching BM25-level speed, while in-domain MRR@10 drops by only 2.2 points; in cross-domain evaluation, the model outperforms the baseline on 12 of 13 tasks. The core contribution is the first explicit use of term DF as a sparsity control signal in learned sparse retrieval (LSR), jointly optimizing efficiency and relevance and thereby improving the industrial deployability of LSR models.
📝 Abstract
Learned Sparse Retrieval (LSR) models encode text as weighted term vectors, which need to be sparse to leverage inverted index structures during retrieval. SPLADE, the most popular LSR model, uses FLOPS regularization to encourage vector sparsity during training. However, FLOPS regularization does not ensure sparsity among terms, only within a given query or document. Terms with very high Document Frequencies (DFs) substantially increase latency in production retrieval engines, such as Apache Solr, due to their lengthy posting lists. To address the issue of high DFs, we present a new variant of FLOPS regularization: DF-FLOPS. This new regularization technique penalizes the usage of high-DF terms, thereby shortening posting lists and reducing retrieval latency. Unlike other inference-time sparsification methods, such as stopword removal, DF-FLOPS regularization allows for the selective inclusion of high-frequency terms in cases where the terms are truly salient. We find that DF-FLOPS successfully reduces the prevalence of high-DF terms and lowers retrieval latency (around 10x faster) in a production-grade engine while maintaining effectiveness both in-domain (only a 2.2-point drop in MRR@10) and cross-domain (improved performance in 12 out of 13 tasks on which we tested). With retrieval latencies on par with BM25, this work provides an important step towards making LSR practical for deployment in production-grade search engines.
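To make the contrast concrete, here is a minimal pure-Python sketch of the standard FLOPS regularizer alongside a hypothetical DF-weighted variant. The standard FLOPS penalty sums, over the vocabulary, the squared mean activation of each term within a batch; the abstract does not give the exact DF-FLOPS formula, so the per-term DF weighting below (`df[j] / n_corpus`) is an illustrative assumption, not the paper's definition:

```python
# Sketch: FLOPS regularization vs. an assumed DF-weighted variant (DF-FLOPS).
# `batch` is a list of document vectors (one activation per vocabulary term).

def flops_penalty(batch):
    """FLOPS regularizer: sum over the vocab of the squared mean activation.

    Penalizes terms that are active across many documents in the batch,
    but treats every vocabulary term identically.
    """
    n_docs = len(batch)
    vocab_size = len(batch[0])
    return sum(
        (sum(doc[j] for doc in batch) / n_docs) ** 2
        for j in range(vocab_size)
    )

def df_flops_penalty(batch, df, n_corpus):
    """Hypothetical DF-FLOPS: scale each term's FLOPS penalty by its
    corpus document frequency, so high-DF terms (long posting lists)
    are suppressed more strongly. The weighting df[j] / n_corpus is an
    assumed form for illustration only.
    """
    n_docs = len(batch)
    vocab_size = len(batch[0])
    penalty = 0.0
    for j in range(vocab_size):
        mean_act = sum(doc[j] for doc in batch) / n_docs
        df_weight = df[j] / n_corpus  # larger for common terms
        penalty += df_weight * mean_act ** 2
    return penalty

# Toy example: term 0 appears in 900 of 1000 corpus docs, term 1 in 10.
batch = [[1.0, 0.5], [0.8, 0.0]]
df = [900, 10]
print(flops_penalty(batch))               # -> 0.8725
print(df_flops_penalty(batch, df, 1000))  # -> 0.729625
```

Under this weighting, the common term's contribution dominates the penalty (0.729 vs. 0.000625), so gradient descent pushes its activations toward zero far harder than the rare term's, which matches the paper's stated goal of shortening the posting lists of high-DF terms while leaving salient rare terms largely untouched.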