Query Expansion in the Age of Pre-trained and Large Language Models: A Comprehensive Survey

📅 2025-09-09
🤖 AI Summary
Modern information retrieval faces lexical mismatch between short queries and dynamic, heterogeneous corpora, necessitating query expansion (QE) methods adapted to the large language model (LLM) era. This paper proposes a four-dimensional analytical framework—covering injection mechanisms, knowledge fusion, alignment learning, and knowledge graph grounding—to systematically survey PLM/LLM-driven QE techniques across encoder-only, encoder-decoder, decoder-only, instruction-tuned, and multilingual architectures, while integrating external knowledge bases, conversational interaction, and knowledge graphs. The survey compares traditional and neural QE across seven key aspects and maps applications across diverse retrieval scenarios, highlighting LLMs' strengths in zero-shot QE and controllable generation, and identifying robust strategies to mitigate topic drift and hallucination. The work establishes a unified taxonomy for QE and opens new research directions in quality control, cost optimization, domain adaptation, and fairness evaluation.

📝 Abstract
Modern information retrieval (IR) must bridge short, ambiguous queries and ever more diverse, rapidly evolving corpora. Query Expansion (QE) remains a key mechanism for mitigating vocabulary mismatch, but the design space has shifted markedly with pre-trained language models (PLMs) and large language models (LLMs). This survey synthesizes the field from three angles: (i) a four-dimensional framework of query expansion - from the point of injection (explicit vs. implicit QE), through grounding and interaction (knowledge bases, model-internal capabilities, multi-turn retrieval) and learning alignment, to knowledge graph-based augmentation; (ii) a model-centric taxonomy spanning encoder-only, encoder-decoder, decoder-only, instruction-tuned, and domain/multilingual variants, highlighting their characteristic affordances for QE (contextual disambiguation, controllable generation, zero-/few-shot reasoning); and (iii) practice-oriented guidance on where and how neural QE helps in first-stage retrieval, multi-query fusion, re-ranking, and retrieval-augmented generation (RAG). We compare traditional query expansion with PLM/LLM-based methods across seven key aspects, and we map applications across web search, biomedicine, e-commerce, open-domain QA/RAG, conversational and code search, and cross-lingual settings. The review distills design patterns - grounding and interaction, alignment/distillation (SFT/PEFT/DPO), and KG constraints - as robust remedies to topic drift and hallucination. We conclude with an agenda on quality control, cost-aware invocation, domain/temporal adaptation, evaluation beyond end-task metrics, and fairness/privacy. Collectively, these insights provide a principled blueprint for selecting and combining QE techniques under real-world constraints.
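As a concrete illustration of the multi-query fusion setting the abstract mentions: an LLM generates several expanded variants of a query, each variant retrieves its own ranked list, and the lists are merged. The sketch below is a minimal toy, not the paper's method — it hard-codes the LLM-generated variants, stands in a token-overlap scorer for a real first-stage retriever (e.g. BM25), and assumes Reciprocal Rank Fusion (RRF) as the merge rule.

```python
from collections import defaultdict

def retrieve(query: str, docs: list[str], k: int = 3) -> list[int]:
    """Toy retriever: rank documents by token overlap with the query
    (stand-in for BM25 or a dense first-stage retriever)."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        range(len(docs)),
        key=lambda i: -len(q_tokens & set(docs[i].lower().split())),
    )
    return scored[:k]

def rrf_fuse(rankings: list[list[int]], c: int = 60) -> list[int]:
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (c + rank)."""
    scores: dict[int, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (c + rank)
    return sorted(scores, key=scores.get, reverse=True)

docs = [
    "heart attack symptoms and treatment",
    "myocardial infarction risk factors",
    "how to bake sourdough bread",
]
# Expanded variants as an LLM might produce them (hard-coded here).
variants = [
    "heart attack",
    "myocardial infarction",
    "cardiac arrest symptoms",
]
rankings = [retrieve(v, docs) for v in variants]
fused = rrf_fuse(rankings)
print(fused)  # doc ids, best first
```

Because each variant surfaces documents the others miss (here, "myocardial infarction" reaches a document the literal query never matches), the fused list is more robust to vocabulary mismatch than any single ranking.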
Problem

Research questions and friction points this paper is trying to address.

Addressing vocabulary mismatch in information retrieval with query expansion
Surveying PLM and LLM impacts on query expansion design and methods
Providing guidance on neural query expansion applications across domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging pre-trained and large language models for query expansion
Employing knowledge graphs to mitigate topic drift and hallucination
Integrating query expansion into retrieval-augmented generation systems
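One way to read the knowledge-graph bullet above: before expansion terms are injected, they can be filtered to those lying close to the query's entities in a KG, so off-topic candidates are dropped before they cause drift. The sketch below is an illustrative assumption, not the paper's construction — the toy graph, the 2-hop radius, and all terms are made up for the example.

```python
from collections import deque

# Toy knowledge graph as an adjacency list (illustrative only).
KG = {
    "heart attack": ["myocardial infarction", "chest pain"],
    "myocardial infarction": ["heart attack", "troponin"],
    "chest pain": ["heart attack", "angina"],
    "troponin": ["myocardial infarction"],
    "angina": ["chest pain"],
    "java": ["coffee", "programming language"],
    "coffee": ["java"],
    "programming language": ["java"],
}

def within_hops(kg: dict, start: str, max_hops: int) -> set[str]:
    """BFS: all nodes reachable from `start` in at most `max_hops` edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:
            continue
        for nbr in kg.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return seen

def kg_filter(query_entity: str, candidates: list[str], max_hops: int = 2) -> list[str]:
    """Keep only expansion candidates within `max_hops` of the query entity."""
    allowed = within_hops(KG, query_entity, max_hops)
    return [c for c in candidates if c in allowed]

candidates = ["myocardial infarction", "troponin", "coffee", "angina"]
print(kg_filter("heart attack", candidates))  # "coffee" is pruned
```

The hop radius is the knob: a small radius keeps expansion tightly on-topic, a larger one trades precision for recall.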
Minghan Li
School of Computer Science and Technology, Soochow University, China
Xinxuan Lv
School of Computer Science and Technology, Soochow University, China
Junjie Zou
School of Computer Science and Technology, Soochow University, China
Tongna Chen
School of Computer Science and Technology, Soochow University, China
Chao Zhang
School of Computer Science and Technology, Soochow University, China
Suchao An
School of Computer Science and Technology, Soochow University, China
Ercong Nie
LMU Munich, MCML
Computational Linguistics, Natural Language Processing
Guodong Zhou
Soochow University, China
Natural Language Processing, Artificial Intelligence