🤖 AI Summary
This work addresses the out-of-domain (OOD) item recommendation problem in large language model (LLM)-based recommender systems. To ensure strict domain adherence, we propose two complementary approaches: RecLM-ret (retrieval-augmented) and RecLM-cgen (constraint-guided generation). Our core contribution is a lightweight, recommendation-specific constrained generation mechanism that integrates domain lexicon guidance with optimized decoding strategies—preserving LLMs’ general capabilities while enhancing both domain compliance and generation accuracy. The methods are plug-and-play and incur minimal computational overhead. Evaluated on three standard recommendation benchmarks, our approach eliminates out-of-domain recommendations entirely and achieves up to a 12.7% improvement in recommendation accuracy, significantly outperforming existing LLM-based recommendation methods.
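The "domain lexicon guidance" described above plausibly corresponds to prefix-tree (trie) constrained decoding: the catalog of in-domain item titles is compiled into a trie over token ids, and at each generation step the decoder's vocabulary is masked to tokens that continue a valid in-domain title. A minimal illustrative sketch, in which all names and the toy catalog are assumptions for exposition, not the paper's actual implementation:

```python
# Sketch of trie-constrained decoding over a closed item catalog.
# Illustrative only; RecLM-cgen's actual mechanism may differ in detail.

class TrieNode:
    def __init__(self):
        self.children = {}   # token id -> TrieNode
        self.is_end = False  # marks the end of a complete item title

def build_trie(item_token_ids):
    """Compile the in-domain catalog (lists of token ids) into a trie."""
    root = TrieNode()
    for ids in item_token_ids:
        node = root
        for t in ids:
            node = node.children.setdefault(t, TrieNode())
        node.is_end = True
    return root

def allowed_next_tokens(trie, prefix):
    """Token ids that legally extend `prefix` toward some in-domain title.

    At decoding time, all other logits would be masked to -inf, so the
    model can never emit an out-of-domain item.
    """
    node = trie
    for t in prefix:
        if t not in node.children:
            return set()  # prefix left the catalog: no legal continuation
        node = node.children[t]
    return set(node.children)

# Toy catalog: three item titles pre-tokenized into integer token ids.
catalog = [[5, 9, 2], [5, 9, 7], [8, 1]]
trie = build_trie(catalog)

print(allowed_next_tokens(trie, []))      # {5, 8}
print(allowed_next_tokens(trie, [5, 9]))  # {2, 7}
print(allowed_next_tokens(trie, [3]))     # set()
```

In an actual LLM decoding loop, `allowed_next_tokens` would be applied per step (e.g. via a `prefix_allowed_tokens_fn`-style hook) only within a span marked for item generation, leaving free-form text unconstrained.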
📝 Abstract
Large Language Models (LLMs) have shown promise for generative recommender systems due to their transformative capabilities in user interaction. However, ensuring they do not recommend out-of-domain (OOD) items remains a challenge. We study two distinct methods to address this issue: RecLM-ret, a retrieval-based method, and RecLM-cgen, a constrained generation method. Both methods integrate seamlessly with existing LLMs to ensure in-domain recommendations. Comprehensive experiments on three recommendation datasets demonstrate that RecLM-cgen consistently outperforms RecLM-ret and existing LLM-based recommender models in accuracy while eliminating OOD recommendations, making it the preferred method for adoption. Additionally, RecLM-cgen maintains strong generalist capabilities and is a lightweight plug-and-play module that integrates easily into LLMs, offering valuable practical benefits to the community. Source code is available at https://github.com/microsoft/RecAI.