STAR: A Simple Training-free Approach for Recommendations using Large Language Models

📅 2024-10-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM-based recommendation methods rely on fine-tuning, which entails high computational cost and engineering complexity, while zero-shot approaches suffer significant performance degradation because they neglect collaborative signals. This paper proposes STAR, a training-free framework that performs end-to-end recommendation without any fine-tuning. STAR integrates LLM-derived semantic embeddings with explicit collaborative filtering signals during retrieval, then leverages an LLM for parameter-free pairwise ranking in the subsequent stage. Its core contribution is the explicit injection of collaborative information into the LLM-based retrieval process, establishing a new paradigm for fine-tuning-free LLM recommendation. On the Amazon Review datasets, the retrieval stage alone is already competitive; the full method improves Hits@10 over the best supervised baseline by 23.8% on Beauty and 37.5% on Toys & Games (trailing by 1.8% on Sports & Outdoors), all without model training or parameter updates.

📝 Abstract
Recent progress in large language models (LLMs) offers promising new approaches for recommendation system tasks. While the current state-of-the-art methods rely on fine-tuning LLMs to achieve optimal results, this process is costly and introduces significant engineering complexities. Conversely, methods that directly use LLMs without additional fine-tuning result in a large drop in recommendation quality, often due to the inability to capture collaborative information. In this paper, we propose a Simple Training-free Approach for Recommendation (STAR), a framework that utilizes LLMs and can be applied to various recommendation tasks without the need for fine-tuning, while maintaining high-quality recommendation performance. Our approach involves a retrieval stage that uses semantic embeddings from LLMs combined with collaborative user information to retrieve candidate items. We then apply an LLM for pairwise ranking to enhance next-item prediction. Experimental results on the Amazon Review dataset show competitive performance for next-item prediction, even with our retrieval stage alone. Our full method achieves Hits@10 performance of +23.8% on Beauty, +37.5% on Toys & Games, and -1.8% on Sports & Outdoors relative to the best supervised models. This framework offers an effective alternative to traditional supervised models, highlighting the potential of LLMs in recommendation systems without extensive training or custom architectures.
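The retrieval stage described in the abstract, which blends LLM semantic similarity with collaborative user signals over a user's recent history, can be sketched as follows. This is a minimal illustration under stated assumptions: the blending weight `alpha`, the recency `decay`, and the dict-based similarity tables are placeholders for this example, not the paper's exact formulation.

```python
def retrieval_scores(history, candidates, sem_sim, collab_sim,
                     alpha=0.5, decay=0.7):
    """Score candidate items for a user by blending semantic and
    collaborative similarity to the user's recent history.

    history: user's interacted item ids, oldest first
    sem_sim / collab_sim: item-to-item similarity lookups, e.g. from
        LLM embeddings and from co-engagement statistics (assumed shape)
    alpha: weight on the semantic signal vs. the collaborative one
    decay: exponential down-weighting of older history items
    """
    scores = {}
    for c in candidates:
        s = 0.0
        # walk history from most recent to oldest, decaying older items
        for t, h in enumerate(reversed(history)):
            blend = alpha * sem_sim[h][c] + (1 - alpha) * collab_sim[h][c]
            s += (decay ** t) * blend
        scores[c] = s / len(history)
    return scores
```

The top-scoring candidates from this stage would then be passed to the LLM ranking stage.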
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning LLMs for recommendation is costly and engineering-heavy
Zero-shot LLM use degrades quality by missing collaborative signals
How to keep recommendation quality high without any training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free LLM-based recommendation
Semantic embeddings combined with collaborative information in retrieval
LLM pairwise ranking for next-item prediction
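The pairwise-ranking idea in the bullets above can be illustrated with a comparison-driven pass over the retrieved candidates. This is a sketch: `prefer(a, b)` stands in for an LLM prompt asking which of two items the user is more likely to choose next, and the bubble-style comparison schedule is one illustrative choice, not necessarily the paper's.

```python
def pairwise_rank(candidates, prefer):
    """Order retrieved candidates using only pairwise preference calls.

    candidates: items from the retrieval stage
    prefer(a, b): returns True if a should rank ahead of b; in STAR-style
        ranking this would be answered by an LLM, with no parameter updates
    """
    items = list(candidates)
    # bubble-style passes: repeatedly compare adjacent items and move
    # the preferred one earlier until the list is fully ordered
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if prefer(items[j + 1], items[j]):
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```

In practice one would cap the number of comparisons (LLM calls are the cost here), e.g. by ranking only the top retrieved candidates.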
Dong-Ho Lee
University of Southern California, Los Angeles, California, USA
Adam Kraft
Google DeepMind, Mountain View, California, USA
Long Jin
Google, Mountain View, California, USA
Nikhil Mehta
Google DeepMind
Deep Learning, Continual Online Learning, Bayesian Neural Networks
Taibai Xu
Google, Mountain View, California, USA
Lichan Hong
Google DeepMind
Recommendation System, LLM, Deep Learning, Social Computing, Visualization
E. Chi
Google DeepMind, Mountain View, California, USA
Xinyang Yi
Google DeepMind
Machine Learning, LLMs, Recommendations