EnrichIndex: Using LLMs to Enrich Retrieval Indices Offline

📅 2025-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing retrieval systems struggle to capture implicit relevance between documents and queries — for example, relevance signaled by the jargon or structure of technical texts and tables rather than by explicit wording — while online LLM-based re-ranking incurs prohibitive latency and computational overhead. To address this, the authors propose EnrichIndex, an offline LLM indexing paradigm that semantically enriches documents in a single pass over the corpus at ingestion time, using an LLM to build enriched retrieval indices without any online LLM invocation. The enriched indices also complement existing online approaches, boosting the performance of LLM re-rankers. Evaluated on five retrieval tasks involving passages and tables, EnrichIndex achieves average improvements of 11.7 points in Recall@10 and 10.6 points in NDCG@10 over strong online LLM-based baselines, while processing 293.3x fewer online LLM tokens. The approach thus offers a favorable balance of effectiveness, efficiency, and cost.

📝 Abstract
Existing information retrieval systems excel in cases where the language of target documents closely matches that of the user query. However, real-world retrieval systems are often required to implicitly reason about whether a document is relevant. For example, when retrieving technical texts or tables, their relevance to the user query may be implied through particular jargon or structure, rather than explicitly expressed in their content. Large language models (LLMs) hold great potential for identifying such implied relevance by leveraging their reasoning skills. Nevertheless, current LLM-augmented retrieval is hindered by high latency and computation cost, as the LLM typically computes the query-document relevance online, for every query anew. To tackle this issue, we introduce EnrichIndex, a retrieval approach which instead uses the LLM offline to build semantically-enriched retrieval indices, performing a single pass over all documents in the retrieval corpus at ingestion time. Furthermore, the semantically-enriched indices can complement existing online retrieval approaches, boosting the performance of LLM re-rankers. We evaluated EnrichIndex on five retrieval tasks, involving passages and tables, and found that it outperforms strong online LLM-based retrieval systems, with average improvements of 11.7 points in Recall@10 and 10.6 points in NDCG@10. In terms of online calls to the LLM, it processes 293.3 times fewer tokens, which greatly reduces online latency and cost. Overall, EnrichIndex is an effective way to build better retrieval indices offline by leveraging the strong reasoning skills of LLMs.

Problem

Research questions and friction points this paper is trying to address.

Improving retrieval systems for implied relevance in documents
Reducing latency and cost in LLM-augmented retrieval
Enhancing offline indices using LLMs for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Offline LLM enrichment of retrieval indices
Semantically-enriched indices boost retrieval performance
Reduces online LLM calls significantly
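The offline enrichment idea above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: `fake_llm_enrich` stands in for a real offline LLM call that describes a document's purpose in plain language, and the toy term-overlap scorer stands in for a real dense-plus-lexical index. The point it demonstrates is that a query can match a document through the enrichment text even when it shares no terms with the document itself, with no LLM call at query time.

```python
def fake_llm_enrich(text: str) -> str:
    # Placeholder for an offline LLM call. A real system would prompt an
    # LLM to describe the document's purpose, audience, and implied topics;
    # here we hardcode hints so the sketch runs without an LLM.
    hints = {"Revenue": "a financial report in tabular form",
             "API": "developer documentation for programmers"}
    for key, hint in hints.items():
        if key in text:
            return hint
    return ""

def build_enriched_index(corpus: dict) -> dict:
    # Single pass over the corpus at ingestion time: each document is
    # stored together with its enrichment, so query-time retrieval needs
    # no online LLM invocation.
    return {doc_id: text + "\n" + fake_llm_enrich(text)
            for doc_id, text in corpus.items()}

def lexical_score(query: str, entry: str) -> int:
    # Toy term-overlap scorer; a real system would combine dense vectors
    # with a proper lexical index (e.g. BM25).
    return len(set(query.lower().split()) & set(entry.lower().split()))

corpus = {"d1": "Q3 FY2024 Revenue Table",
          "d2": "API reference for the indexing service"}
index = build_enriched_index(corpus)

# "financial report" appears nowhere in d1's original text; it matches
# only through the offline enrichment.
best = max(index, key=lambda d: lexical_score("financial report", index[d]))
# best == "d1"
```

The design choice this illustrates is the one the paper argues for: the expensive reasoning step runs once per document offline, rather than once per query online.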