🤖 AI Summary
To address hallucination, redundancy, and semantic homogenization in large language model (LLM)-generated knowledge for recommender systems, this paper proposes the Knowledge Selection & Exploitation Recommendation (KSER) framework. Methodologically, KSER introduces an Embedding Selection Filter Network (ESFNet) for adaptive knowledge filtering and employs an attention mechanism to align the LLM's semantic space with the recommendation feature space. Crucially, it supports an extractor-only training paradigm that decouples knowledge extraction from downstream recommendation modeling. Experiments across multiple recommendation scenarios show that KSER significantly improves recommendation performance: ESFNet effectively suppresses noisy or irrelevant knowledge, while extractor-only training achieves both computational efficiency and strong generalization, establishing a lightweight paradigm for knowledge-enhanced recommendation.
📝 Abstract
In recent years, there has been growing interest in leveraging the impressive generalization and reasoning abilities of large language models (LLMs) to improve the performance of recommenders. In this way, recommenders can access and learn additional world knowledge and reasoning information via LLMs. In general, however, the world knowledge derived from LLMs for different users and items suffers from hallucination, content redundancy, and information homogenization. Directly feeding the generated response embeddings into the recommendation model can therefore lead to unavoidable performance deterioration. To address these challenges, we propose the Knowledge Selection & Exploitation Recommendation (KSER) framework, which effectively selects and exploits high-quality knowledge from LLMs. The framework consists of two key components: a knowledge filtering module and an embedding-space alignment module. In the knowledge filtering module, an Embedding Selection Filter Network (ESFNet) is designed to assign adaptive weights to different knowledge chunks in different knowledge fields. In the space alignment module, an attention-based architecture is proposed to align the semantic embeddings from LLMs with the feature space used to train the recommendation models. In addition, two training strategies, **all-parameters training** and **extractor-only training**, are proposed to flexibly adapt to different downstream tasks and application scenarios, where the extractor-only training strategy offers a novel perspective on knowledge-augmented recommendation. Experimental results validate the necessity and effectiveness of both the knowledge filtering and alignment modules, and further demonstrate the efficiency and effectiveness of the extractor-only training strategy.
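The abstract does not include code, but the two modules can be pictured with a minimal NumPy sketch. Everything here is an illustrative assumption, not the authors' implementation: ESFNet is approximated as a learned gating vector that softmax-weights knowledge chunks, and the alignment module as a single scaled dot-product attention step in which recommendation features act as queries over the filtered LLM embeddings. All names, dimensions, and weight matrices are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical ESFNet-style filter: assign adaptive weights to knowledge chunks,
# down-weighting noisy or redundant ones before they reach the recommender.
def filter_knowledge(chunks, w_gate):
    """chunks: (n_chunks, d) LLM chunk embeddings; w_gate: (d,) learned scorer."""
    scores = chunks @ w_gate                 # one relevance score per chunk
    weights = softmax(scores)                # adaptive, normalized chunk weights
    return weights[:, None] * chunks         # re-weighted knowledge chunks

# Hypothetical attention-based alignment: recommendation features attend to the
# filtered LLM embeddings and pull knowledge into the recommendation space.
def align(rec_feats, llm_emb, W_q, W_k, W_v):
    """rec_feats: (n, d_rec); llm_emb: (m, d_llm); W_* are learned projections."""
    q = rec_feats @ W_q                      # queries from recommendation features
    k = llm_emb @ W_k                        # keys from LLM knowledge
    v = llm_emb @ W_v                        # values from LLM knowledge
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v                          # knowledge aligned per rec feature

rng = np.random.default_rng(0)
chunks = rng.normal(size=(5, 8))             # 5 knowledge chunks, dim 8
filtered = filter_knowledge(chunks, rng.normal(size=8))
aligned = align(rng.normal(size=(3, 4)), filtered,
                rng.normal(size=(4, 8)), rng.normal(size=(8, 8)),
                rng.normal(size=(8, 8)))
print(aligned.shape)                         # one aligned vector per rec feature
```

Under the extractor-only strategy described above, only components like `w_gate` and the `W_*` projections would be trained, leaving the downstream recommendation model untouched.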