🤖 AI Summary
To address insufficient scenario knowledge fusion, weak cross-scenario personalized preference modeling, and the industrial deployment challenges of large language models (LLMs), namely high latency and computational cost, this paper proposes a fine-tuning-free, LLM-driven enhancement paradigm for multi-scenario recommendation (MSR). It designs scenario-level and user-level zero-shot prompts to extract multi-granularity semantic knowledge, and introduces a hierarchical meta-network that explicitly decouples scenario-aware capability from personalized recommendation capability. The method is compatible with mainstream MSR backbone models and achieves significant improvements in Recall@10 and NDCG@10 on the KuaiSAR-small, KuaiSAR, and Amazon datasets. It supports efficient inference suitable for industrial-scale deployment and improves recommendation interpretability through transparent, knowledge-grounded prompting.
📝 Abstract
As demand for more personalized recommendation grows and commercial scenarios proliferate, multi-scenario recommendation (MSR), which uses data from all scenarios to simultaneously improve recommendation performance in each of them, has attracted much attention. However, existing methods tend to integrate insufficient scenario knowledge and neglect learning personalized cross-scenario preferences, leading to sub-optimal performance. Meanwhile, although large language models (LLMs) have shown great capability for reasoning and capturing semantic information, high inference latency and the high computational cost of fine-tuning hinder their adoption in industrial recommender systems. To fill these gaps, we propose an LLM-enhanced paradigm, LLM4MSR. Specifically, we first leverage an LLM to uncover multi-level knowledge from designed scenario- and user-level prompts without fine-tuning the LLM, then adopt hierarchical meta networks to generate multi-level meta layers that explicitly improve scenario-aware and personalized recommendation capability. Experiments on the KuaiSAR-small, KuaiSAR, and Amazon datasets validate significant advantages of LLM4MSR: (i) effectiveness and compatibility with different multi-scenario backbone models, (ii) high efficiency and deployability in industrial recommender systems, and (iii) improved interpretability. The implemented code and data are available to ease reproduction.
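To make the meta-network mechanism in the abstract concrete, here is a minimal NumPy sketch of the general "hypernetwork" idea it describes: an embedding of LLM-extracted knowledge is fed to a meta network that *generates* the weights and bias of a meta layer, which is then applied to the backbone's representation, once at the scenario level and once at the user level. All dimensions, the linear weight generator, the shared generator parameters, and the ReLU activation are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def meta_network(knowledge_emb, in_dim, out_dim, W_gen, b_gen):
    """Generate the parameters (W, b) of a meta layer from a knowledge embedding."""
    params = knowledge_emb @ W_gen + b_gen            # flat vector of generated params
    W = params[: in_dim * out_dim].reshape(in_dim, out_dim)
    b = params[in_dim * out_dim:]
    return W, b

def apply_meta_layer(x, W, b):
    """Apply a generated meta layer with a ReLU activation (assumed)."""
    return np.maximum(x @ W + b, 0.0)

# Hypothetical dimensions: knowledge embedding and backbone hidden size.
k_dim, h_dim = 8, 4

# Shared generator parameters (learned jointly with the backbone in practice).
W_gen = rng.normal(scale=0.1, size=(k_dim, h_dim * h_dim + h_dim))
b_gen = np.zeros(h_dim * h_dim + h_dim)

# Stand-ins for LLM-derived scenario- and user-level knowledge embeddings.
scenario_emb = rng.normal(size=k_dim)
user_emb = rng.normal(size=k_dim)

x = rng.normal(size=h_dim)                            # backbone representation
W_s, b_s = meta_network(scenario_emb, h_dim, h_dim, W_gen, b_gen)
x = apply_meta_layer(x, W_s, b_s)                     # scenario-level meta layer
W_u, b_u = meta_network(user_emb, h_dim, h_dim, W_gen, b_gen)
x = apply_meta_layer(x, W_u, b_u)                     # user-level meta layer
print(x.shape)                                        # (4,)
```

The hierarchy is expressed by chaining the two generated layers, so scenario knowledge conditions the representation before user knowledge personalizes it; because only the small generator is trained, the LLM itself never needs fine-tuning.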