Distilling Transitional Pattern to Large Language Models for Multimodal Session-based Recommendation

📅 2025-04-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address data sparsity and cold-start challenges in multimodal sequential recommendation (MSBR), this paper proposes a unified representation learning framework leveraging the semantic reasoning capabilities of large language models (LLMs). Methodologically, it introduces the novel “transitional mode distillation” paradigm: cross-modal feature alignment guided by mutual information decouples and jointly optimizes knowledge-aware and transfer-aware pathways within a multimodal LLM (MLLM); a parallel dual-encoder architecture—comprising dedicated knowledge and transfer encoders—ensures inter-modal distribution consistency and semantic injection. Extensive experiments on multiple real-world datasets demonstrate that the proposed method significantly outperforms state-of-the-art approaches, achieving average improvements of 12.6% in Recall@20 and 9.8% in MRR. These results validate the effectiveness and generalizability of transitional mode injection for multimodal sequential modeling.

Technology Category

Application Category

📝 Abstract
Session-based recommendation (SBR) predicts the next item based on anonymous sessions. Traditional SBR explores user intents based on ID collaborations or auxiliary content. To further alleviate data sparsity and cold-start issues, recent Multimodal SBR (MSBR) methods utilize simplistic pre-trained models for modality learning but have limitations in semantic richness. Considering semantic reasoning abilities of Large Language Models (LLM), we focus on the LLM-enhanced MSBR scenario in this paper, which leverages LLM cognition for comprehensive multimodal representation generation, to enhance downstream MSBR. Tackling this problem faces two challenges: i) how to obtain LLM cognition on both transitional patterns and inherent multimodal knowledge, ii) how to align both features into one unified LLM, minimize discrepancy while maximizing representation utility. To this end, we propose a multimodal LLM-enhanced framework TPAD, which extends a distillation paradigm to decouple and align transitional patterns for promoting MSBR. TPAD establishes parallel Knowledge-MLLM and Transfer-MLLM, where the former interprets item knowledge-reflected features and the latter extracts transition-aware features underneath sessions. A transitional pattern alignment module harnessing mutual information estimation theory unites two MLLMs, alleviating distribution discrepancy and distilling transitional patterns into modal representations. Extensive experiments on real-world datasets demonstrate the effectiveness of our framework.
Problem

Research questions and friction points this paper is trying to address.

Enhance multimodal session-based recommendation using LLMs
Align transitional patterns and multimodal knowledge in LLMs
Address data sparsity and cold-start issues in MSBR
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-enhanced multimodal representation generation
Parallel Knowledge-MLLM and Transfer-MLLM
Transitional pattern alignment via mutual information
🔎 Similar Papers
No similar papers found.