Synthetic Data Powers Product Retrieval for Long-tail Knowledge-Intensive Queries in E-commerce Search

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of retrieving relevant items for long-tail, knowledge-intensive queries in e-commerce search, where linguistic diversity, ambiguous intent, and sparse user behavior logs hinder the performance of existing retrieval systems. To overcome these limitations, the authors propose an efficient synthetic data generation framework that leverages a large language model (LLM) with multi-reward signal–guided query rewriting to produce high-quality query–item pairs. These synthetic pairs are integrated into training via an offline high-precision retrieval pipeline, effectively distilling the LLM’s query understanding capabilities into the training data and mitigating distributional shift. Remarkably, incorporating only this synthetic data yields substantial gains in recall relevance, with online side-by-side human evaluations confirming a noticeable improvement in user search experience.
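The pipeline described above can be sketched as a small end-to-end loop: generate multiple rewrites for a long-tail query, keep only the ones that pass a combined reward threshold, retrieve items for the surviving rewrites, and pair the retrieved items back with the *original* query so the online model never depends on the rewriter. Everything below is an illustrative assumption, not the authors' code: the function names, the toy keyword-match "retrieval", and the hand-rolled token-overlap rewards all stand in for the paper's LLM rewriter, learned reward models, and offline retrieval pipeline.

```python
# Hypothetical sketch of the synthetic-pair generation loop.
# All components here are toy stand-ins (assumptions), chosen only to
# make the data flow of the paper's framework concrete and runnable.

def rewrite_candidates(query: str, n: int = 3) -> list[str]:
    # Stand-in for the multi-candidate LLM rewriter: the paper uses an
    # LLM; here we just expand the query with fixed intent templates.
    templates = ["{} product", "best {}", "{} for beginners"]
    return [t.format(query) for t in templates[:n]]

def reward(query: str, rewrite: str) -> float:
    # Toy multi-reward signal: a relevance proxy (token overlap with the
    # original query) blended with a length-drift penalty. The paper
    # trains the rewriter against multiple learned reward signals.
    q_tok, r_tok = set(query.split()), set(rewrite.split())
    relevance = len(q_tok & r_tok) / max(len(q_tok), 1)
    brevity = 1.0 / (1 + abs(len(r_tok) - len(q_tok)))
    return 0.7 * relevance + 0.3 * brevity

def synthesize_pairs(query: str, catalog: dict[str, list[str]],
                     threshold: float = 0.5) -> list[tuple[str, str]]:
    """Distill rewriting capability into query-item training pairs.

    A high-precision offline retrieval step (here: exact keyword match
    against a toy catalog mapping item -> keywords) fetches items for
    each high-reward rewrite.
    """
    pairs = []
    for rw in rewrite_candidates(query):
        if reward(query, rw) < threshold:
            continue  # drop low-reward rewrites before retrieval
        for item, keywords in catalog.items():
            if any(tok in keywords for tok in rw.split()):
                # Pair the ORIGINAL query with the retrieved item, so the
                # online retriever is trained without ever seeing the
                # rewrite -- this is what mitigates distributional shift.
                pairs.append((query, item))
    return sorted(set(pairs))
```

The key design point the sketch tries to surface is the last step: rewrites exist only offline, and what reaches training is a clean (original query, item) pair, which is why the online system needs no LLM at serving time.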

📝 Abstract
Product retrieval is the backbone of e-commerce search: for each user query, it identifies a high-recall candidate set from billions of items, laying the foundation for high-quality ranking and user experience. Despite extensive optimization for mainstream queries, existing systems still struggle with long-tail queries, especially knowledge-intensive ones. These queries exhibit diverse linguistic patterns, often lack explicit purchase intent, and require domain-specific knowledge reasoning for accurate interpretation. They also suffer from a shortage of reliable behavioral logs, which makes such queries a persistent challenge for retrieval optimization. To address these issues, we propose an efficient data synthesis framework tailored to retrieval involving long-tail, knowledge-intensive queries. The key idea is to implicitly distill the capabilities of a powerful offline query-rewriting model into an efficient online retrieval system. Leveraging the strong language understanding of LLMs, we train a multi-candidate query rewriting model with multiple reward signals and capture its rewriting capability in well-curated query-product pairs through a powerful offline retrieval pipeline. This design mitigates distributional shift in rewritten queries, which might otherwise limit incremental recall or introduce irrelevant products. Experiments demonstrate that without any additional tricks, simply incorporating this synthetic data into retrieval model training leads to significant improvements. Online Side-By-Side (SBS) human evaluation results indicate a notable enhancement in user search experience.
Problem

Research questions and friction points this paper addresses.

long-tail queries
knowledge-intensive queries
product retrieval
e-commerce search
behavioral logs
Innovation

Methods, ideas, or system contributions that make the work stand out.

synthetic data
query rewriting
long-tail queries
knowledge-intensive retrieval
LLM distillation