SemSR: Semantics aware robust Session-based Recommendations

📅 2025-08-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing session-based recommendation (SBR) models struggle to effectively capture semantic information from item titles/descriptions, leading to inaccurate intent modeling and poor interpretability; conversely, pure large language model (LLM)-based approaches suffer from complex prompt engineering, lack of task-specific feedback, and high computational overhead. To address these limitations, we propose a lightweight LLM-data co-driven framework comprising three synergistic strategies: semantic initialization, in-context learning–enabled LLM agents, and joint training with a deep recommendation model—all without full-parameter fine-tuning. Our approach enhances semantic understanding while preserving efficiency and transparency. Extensive experiments on multiple real-world datasets demonstrate significant improvements in Recall and Mean Reciprocal Rank (MRR), achieving strong performance in both coarse-grained candidate retrieval and fine-grained ranking. The framework is computationally efficient, highly interpretable, and exhibits robust generalization across diverse domains and session lengths.
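The "semantic initialization" strategy above can be illustrated with a small sketch: instead of initializing an SR model's item-embedding table randomly, each item's title/description is encoded into a vector. This is a minimal illustration, not the paper's implementation; `llm_embed` is a hypothetical stand-in (a deterministic hash-based toy) for a real LLM or sentence-embedding encoder.

```python
import hashlib
import math

def llm_embed(text: str, dim: int = 8) -> list[float]:
    """Hypothetical stand-in for an LLM text encoder: maps an item
    title to a fixed-size unit vector. A real system would call a
    sentence-embedding model here instead of hashing."""
    digest = hashlib.sha256(text.encode()).digest()
    vec = [(b / 255.0) * 2.0 - 1.0 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def init_item_embeddings(catalog: dict[int, str], dim: int = 8) -> dict[int, list[float]]:
    """Semantic initialization: seed the SR model's item-embedding
    table from item text rather than random noise."""
    return {item_id: llm_embed(title, dim) for item_id, title in catalog.items()}

catalog = {1: "wireless mouse", 2: "mechanical keyboard", 3: "USB-C hub"}
table = init_item_embeddings(catalog)
```

The resulting vectors would then replace the random initialization of the deep SR model's embedding layer before joint training.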

📝 Abstract
Session-based recommendation (SR) models aim to recommend items to anonymous users based on their behavior during the current session. While various SR models in the literature utilize item sequences to predict the next item, they often fail to leverage semantic information from item titles or descriptions, impeding session intent identification and interpretability. Recent research has explored Large Language Models (LLMs) as promising approaches to enhance session-based recommendations, with both prompt-based and fine-tuning-based methods being widely investigated. However, prompt-based methods struggle to identify optimal prompts that elicit correct reasoning and lack task-specific feedback at test time, resulting in sub-optimal recommendations. Fine-tuning methods incorporate domain-specific knowledge but incur significant computational costs for implementation and maintenance. In this paper, we present multiple approaches to utilize LLMs for session-based recommendation: (i) in-context LLMs as recommendation agents, (ii) LLM-generated representations for semantic initialization of deep learning SR models, and (iii) integration of LLMs with data-driven SR models. Through comprehensive experiments on two real-world publicly available datasets, we demonstrate that LLM-based methods excel at coarse-level retrieval (high Recall values), while traditional data-driven techniques perform well at fine-grained ranking (high Mean Reciprocal Rank values). Furthermore, the integration of LLMs with data-driven SR models significantly outperforms both standalone LLM approaches and data-driven deep learning models, as well as baseline SR models, in terms of both Recall and MRR metrics.
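The abstract's Recall-versus-MRR distinction rests on the standard definitions of these metrics: Recall@K credits a hit anywhere in the top K, while MRR rewards placing the true next item near the top. A minimal sketch of both (per-session; in evaluation they are averaged over all test sessions):

```python
def recall_at_k(ranked: list[int], target: int, k: int) -> float:
    """1.0 if the ground-truth next item appears in the top-k, else 0.0."""
    return 1.0 if target in ranked[:k] else 0.0

def mrr(ranked: list[int], target: int) -> float:
    """Reciprocal rank of the ground-truth item (0.0 if absent)."""
    return 1.0 / (ranked.index(target) + 1) if target in ranked else 0.0

# One toy session: the model ranked the true next item (7) second.
ranking, truth = [4, 7, 9, 2], 7
hit_at_2 = recall_at_k(ranking, truth, 2)  # 1.0
rank_score = mrr(ranking, truth)           # 0.5
```

This makes the paper's finding concrete: a coarse LLM retriever can score well on Recall@K (the item is *somewhere* in the top K) while still having low MRR (it is rarely ranked first).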
Problem

Research questions and friction points this paper is trying to address.

Leveraging semantic information from item titles for session-based recommendations
Addressing sub-optimal performance of prompt-based LLM methods in recommendations
Reducing computational costs while maintaining recommendation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs as recommendation agents
LLM-generated semantic initialization
Integration with data-driven SR models
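The page does not detail how the LLM and the data-driven SR model are combined; one plausible sketch is late score fusion, where per-item scores from both sides are blended before ranking. The function name and the weighting scheme (`alpha`) below are illustrative assumptions, not the paper's stated mechanism.

```python
def fuse_scores(llm_scores: dict[int, float],
                sr_scores: dict[int, float],
                alpha: float = 0.5) -> list[int]:
    """Illustrative late fusion (assumed, not from the paper): blend
    per-item scores from an LLM retriever and a data-driven SR model,
    then rank. `alpha` weights the LLM side."""
    items = set(llm_scores) | set(sr_scores)
    blended = {i: alpha * llm_scores.get(i, 0.0)
                  + (1.0 - alpha) * sr_scores.get(i, 0.0)
               for i in items}
    return sorted(blended, key=blended.get, reverse=True)

# LLM side casts a wide semantic net; SR side sharpens the ordering.
ranking = fuse_scores({1: 0.9, 2: 0.2}, {2: 0.8, 3: 0.4})
```

The intuition matches the abstract: the LLM contributes high-recall candidates while the sequential model supplies the fine-grained ordering, so the blend can beat either component on both metrics.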