🤖 AI Summary
This work addresses the limitations of traditional pseudo-relevance feedback (PRF), which is prone to topic drift due to noise in initial retrieval results, and of pure large language model (LLM)-based query expansion, which often suffers from hallucination and terminological mismatch. To mitigate these issues, the authors propose a two-stage hybrid query expansion approach that inserts an LLM as a relevance filter before the RM3 process: the LLM judges each document in the initial top-k ranking, and only those it accepts as relevant are used for subsequent expansion. By combining the semantic comprehension capabilities of LLMs with the robustness of classical PRF, the method effectively curbs topic drift while preserving interpretability. Experimental results demonstrate that the proposed approach significantly outperforms conventional blind PRF and strong baseline models across multiple standard datasets and evaluation metrics.
📝 Abstract
Query expansion is a long-standing technique to mitigate vocabulary mismatch in ad hoc Information Retrieval. Pseudo-relevance feedback methods, such as RM3, estimate an expanded query model from the top-ranked documents, but remain vulnerable to topic drift when early results include noisy or tangential content. Recent approaches instead prompt Large Language Models to generate synthetic expansions or query variants. While effective, these methods risk hallucinations and misalignment with collection-specific terminology. We propose a hybrid alternative that preserves the robustness and interpretability of classical PRF while leveraging LLM semantic judgement. Our method inserts an LLM-based filtering stage prior to RM3 estimation: the LLM judges the documents in the initial top-$k$ ranking, and RM3 is computed only over those accepted as relevant. This simple intervention improves over blind PRF and a strong baseline across several datasets and metrics.
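The two-stage pipeline described above can be sketched in a few lines. The following is a minimal, self-contained illustration, not the authors' implementation: the `judge` callable stands in for the LLM relevance filter (here a trivial term-overlap heuristic), and `rm3_expand` is a deliberately simplified RM3-style interpolation of the original query model with the most frequent feedback terms.

```python
from collections import Counter

def llm_filter(query, docs, judge):
    """Stage 1: keep only the initially retrieved docs the (mock) LLM accepts as relevant."""
    return [d for d in docs if judge(query, d)]

def rm3_expand(query_terms, feedback_docs, fb_terms=3, orig_weight=0.5):
    """Stage 2: simplified RM3 - interpolate the original query model
    with the top feedback terms estimated from the filtered documents."""
    counts = Counter()
    for doc in feedback_docs:
        counts.update(doc.lower().split())
    total = sum(counts.values())
    fb_model = {t: c / total for t, c in counts.most_common(fb_terms)}
    expanded = {t: orig_weight / len(query_terms) for t in query_terms}
    for t, w in fb_model.items():
        expanded[t] = expanded.get(t, 0.0) + (1 - orig_weight) * w
    return expanded

# Toy usage: the second document is noise that blind PRF would absorb.
query_terms = ["neural", "retrieval"]
top_k = [
    "neural ranking models for retrieval",
    "cooking recipes pasta",  # tangential doc the filter should drop
    "dense retrieval with neural encoders",
]
judge = lambda q, d: any(t in d for t in q.split())  # stand-in for an LLM judgment
filtered = llm_filter(" ".join(query_terms), top_k, judge)
expanded = rm3_expand(query_terms, filtered)
```

In this toy run the noisy cooking document is rejected before RM3 estimation, so none of its terms leak into the expanded query model, which is exactly the topic-drift failure mode the filtering stage is meant to prevent.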