Rethinking LLM-Based Recommendations: A Query Generation-Based, Training-Free Approach

📅 2025-04-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM-based recommendation methods face four key bottlenecks: low retrieval efficiency over large candidate pools, sensitivity to item ordering in prompts (“middle-item blindness”), poor scalability, and evaluation distortion caused by random negative sampling. This paper proposes a Query-to-Recommendation paradigm: leveraging LLMs for zero-shot generation of personalized semantic queries, followed by direct retrieval over the full candidate pool—requiring no fine-tuning, pre-filtering, or negative sampling. We introduce the first training-free, query-driven framework compatible with existing ID-based recommendation systems, effectively overcoming middle-item blindness while ensuring fair exposure for long-tail items and high recommendation diversity. Extensive experiments on three benchmark datasets demonstrate an average 31% improvement in Recall@10 (up to 57%), strong zero-shot performance, and seamless plug-and-play integration into industrial ID-based systems.

Technology Category

Application Category

📝 Abstract
Existing large language model LLM-based recommendation methods face several challenges, including inefficiency in handling large candidate pools, sensitivity to item order within prompts ("lost in the middle"phenomenon) poor scalability, and unrealistic evaluation due to random negative sampling. To address these issues, we propose a Query-to-Recommendation approach that leverages LLMs to generate personalized queries for retrieving relevant items from the entire candidate pool, eliminating the need for candidate pre-selection. This method can be integrated into an ID-based recommendation system without additional training, enhances recommendation performance and diversity through LLMs' world knowledge, and performs well even for less popular item groups. Experiments on three datasets show up to 57 percent improvement, with an average gain of 31 percent, demonstrating strong zero-shot performance and further gains when ensembled with existing models.
Problem

Research questions and friction points this paper is trying to address.

Inefficient handling of large candidate pools in LLM-based recommendations
Sensitivity to item order and poor scalability in current methods
Unrealistic evaluation due to random negative sampling issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates personalized queries for retrieval
Integrates without training into ID-based systems
Improves performance and diversity via LLMs
🔎 Similar Papers
No similar papers found.