LLM-Assisted Pseudo-Relevance Feedback

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of traditional pseudo-relevance feedback (PRF), which is prone to topic drift caused by noise in the initial retrieval results, and of pure large language model (LLM)-based query expansion, which often suffers from hallucination and terminological mismatch. To mitigate both issues, the authors propose a two-stage hybrid query expansion approach that inserts an LLM as a relevance filter before the RM3 stage, retaining only those documents in the initial top-k ranking that the LLM judges relevant for subsequent expansion. By combining the semantic comprehension of LLMs with the robustness of classical PRF, the method curbs topic drift while preserving interpretability. Experimental results demonstrate that the proposed approach significantly outperforms conventional blind PRF and a strong baseline across multiple standard datasets and evaluation metrics.

📝 Abstract
Query expansion is a long-standing technique to mitigate vocabulary mismatch in ad hoc Information Retrieval. Pseudo-relevance feedback methods, such as RM3, estimate an expanded query model from the top-ranked documents, but remain vulnerable to topic drift when early results include noisy or tangential content. Recent approaches instead prompt Large Language Models to generate synthetic expansions or query variants. While effective, these methods risk hallucinations and misalignment with collection-specific terminology. We propose a hybrid alternative that preserves the robustness and interpretability of classical PRF while leveraging LLM semantic judgement. Our method inserts an LLM-based filtering stage prior to RM3 estimation: the LLM judges the documents in the initial top-$k$ ranking, and RM3 is computed only over those accepted as relevant. This simple intervention improves over blind PRF and a strong baseline across several datasets and metrics.
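The filter-then-expand pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm_judge_relevant` is a hypothetical stand-in for the actual LLM relevance prompt (replaced here by a simple term-overlap heuristic), and the interpolation weight `lam` and expansion size `n_terms` are illustrative defaults, not values reported in the paper.

```python
from collections import Counter

def llm_judge_relevant(query, doc):
    # Hypothetical stand-in for the LLM relevance judgement: a real
    # system would prompt a model with the query and document text and
    # parse a yes/no answer. Here: accept if >= 2 query terms overlap.
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) >= 2

def rm3_expand(query, accepted_docs, lam=0.5, n_terms=10):
    # RM3-style interpolation: mix the original query model with a
    # relevance model estimated from the accepted feedback documents.
    rel_counts = Counter(w for d in accepted_docs for w in d.lower().split())
    total = sum(rel_counts.values()) or 1
    q_terms = query.lower().split()
    q_model = Counter(q_terms)
    vocab = set(rel_counts) | set(q_model)
    expanded = {
        w: (1 - lam) * q_model[w] / len(q_terms) + lam * rel_counts[w] / total
        for w in vocab
    }
    # Keep the highest-weighted terms as the expanded query model.
    return sorted(expanded.items(), key=lambda kv: -kv[1])[:n_terms]

def filtered_prf(query, top_k_docs):
    # Stage 1: LLM filters the initial top-k ranking.
    accepted = [d for d in top_k_docs if llm_judge_relevant(query, d)]
    # Stage 2: RM3 is computed only over the accepted documents
    # (falling back to blind PRF if the filter rejects everything).
    return rm3_expand(query, accepted or top_k_docs)
```

The key design point is that the LLM never generates expansion terms itself; it only gates which pseudo-relevant documents feed the classical RM3 estimator, so every expansion term remains grounded in the collection's own vocabulary.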
Problem

Research questions and friction points this paper is trying to address.

query expansion
pseudo-relevance feedback
topic drift
hallucination
vocabulary mismatch
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pseudo-Relevance Feedback
Large Language Models
Query Expansion
Information Retrieval
Relevance Filtering