AI Summary
This work addresses the limitations of embeddings generated by general-purpose large language models (LLMs) in recommender systems, which often lack structural coherence, discriminative power, and domain-specific semantics. To bridge this gap, the authors propose an embedding adaptation mechanism tailored for recommendation tasks: the pre-trained language model parameters are kept frozen, and only compact yet informative item description embeddings are optimized to align with the recommendation objective. This approach enhances both the domain relevance and the discriminability of the embeddings, while naturally supporting sequential recommendation and semantic ID tokenization. Extensive experiments demonstrate that the proposed method significantly outperforms existing pre-trained language model-based and embedding baselines across multiple recommendation benchmarks, underscoring the critical role of embedding adaptation in effectively integrating general-purpose LLMs into recommender systems.
Abstract
Recent recommender systems increasingly leverage embeddings from large pre-trained language models (PLMs). However, such embeddings exhibit two key limitations: (1) PLMs are not explicitly optimized to produce structured and discriminative embedding spaces, and (2) their representations remain overly generic, often failing to capture the domain-specific semantics crucial for recommendation tasks. We present EncodeRec, an approach designed to align textual representations with recommendation objectives while learning compact, informative embeddings directly from item descriptions. EncodeRec keeps the language model parameters frozen during recommender system training, making it computationally efficient without sacrificing semantic fidelity. Experiments across core recommendation benchmarks demonstrate its effectiveness both as a backbone for sequential recommendation models and for semantic ID tokenization, showing substantial gains over PLM-based and embedding model baselines. These results underscore the pivotal role of embedding adaptation in bridging the gap between general-purpose language models and practical recommender systems.
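The core idea above, freezing the language model's parameters while optimizing only compact item embeddings derived from item descriptions against the recommendation objective, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's actual implementation: the class name `AdaptedItemEmbeddings`, the dimensions, and the next-item loss are all assumptions for demonstration.

```python
# Illustrative sketch of embedding adaptation with a frozen PLM backbone.
# All names and dimensions are hypothetical, not taken from EncodeRec.
import torch
import torch.nn as nn

class AdaptedItemEmbeddings(nn.Module):
    def __init__(self, frozen_text_embeddings: torch.Tensor, adapted_dim: int = 64):
        super().__init__()
        # Frozen PLM-derived item description embeddings: stored as a
        # buffer, so they receive no gradients and are never updated.
        self.register_buffer("text_emb", frozen_text_embeddings)
        # Only this compact projection is trained against the
        # recommendation objective.
        self.adapter = nn.Linear(frozen_text_embeddings.size(1), adapted_dim)

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.text_emb[item_ids])

# Toy usage: 100 items with 384-dim frozen text embeddings, adapted to 64 dims.
plm_emb = torch.randn(100, 384)
model = AdaptedItemEmbeddings(plm_emb, adapted_dim=64)

user_hist = torch.tensor([3, 17, 42])   # a user's interaction sequence
next_item = torch.tensor([7])           # ground-truth next item

# Score every item against a pooled representation of the user's history
# (a stand-in for a sequential recommender on top of the adapted embeddings).
scores = model(user_hist).mean(0) @ model(torch.arange(100)).T
loss = nn.functional.cross_entropy(scores.unsqueeze(0), next_item)
loss.backward()  # gradients flow only into the adapter, not the frozen embeddings
```

Because the PLM embeddings enter as a fixed buffer, the trainable parameter count is just that of the small adapter, which is what makes the approach computationally efficient while preserving the semantics encoded in the frozen text representations.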