Harnessing Multimodal Large Language Models for Multimodal Sequential Recommendation

📅 2024-08-19
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses multimodal sequential recommendation under dynamically evolving user preferences. We propose the first framework leveraging multimodal large language models (MLLMs) for dynamic user modeling—departing from conventional text-only LLM paradigms. Our approach comprises two stages: (1) MLLM-guided generation of joint image-text item summaries, and (2) LLM-based recursive temporal modeling of user preference evolution over these sequential summaries. To tackle cross-modal alignment and long-horizon temporal dependency challenges, we design an MLLM supervised fine-tuning (SFT) recommendation framework integrating image feature textualization, prompt engineering, and iterative preference generation. Extensive experiments on multiple public multimodal recommendation benchmarks demonstrate significant improvements over state-of-the-art methods. Notably, our method achieves superior robustness in cold-start and cross-domain scenarios, validating its generalizability and adaptability to real-world dynamic recommendation settings.
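To make the two-stage pipeline concrete, below is a minimal Python sketch of its control flow under stated assumptions: `query_mllm`, `query_llm`, the `Item` fields, the prompt wording, and the block size are all illustrative placeholders, not the paper's actual interface. In practice these stubs would route to a real multimodal and text-only (L)LM.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    image_path: str  # raw item image for the MLLM summarizer

def query_mllm(prompt: str, image_path: str) -> str:
    """Stand-in for a multimodal LLM call (image + text in, text out)."""
    return f"[textual summary of {image_path}]"

def query_llm(prompt: str) -> str:
    """Stand-in for a text-only LLM call."""
    return "[updated preference summary]"

def summarize_item(item: Item) -> str:
    # Stage 1: image feature textualization. The MLLM fuses the item's
    # image and title into a single natural-language item summary.
    prompt = f"Describe this product image and relate it to the title '{item.title}'."
    return query_mllm(prompt, item.image_path)

def summarize_preferences(history: list[Item], block_size: int = 5) -> str:
    # Stage 2: recurrent preference summarization. Each block of item
    # summaries is folded into a running preference summary, so arbitrarily
    # long interaction histories never exceed the LLM's context window.
    preference = "No preference information yet."
    for start in range(0, len(history), block_size):
        block = history[start:start + block_size]
        item_summaries = "\n".join(summarize_item(it) for it in block)
        prompt = (
            f"Current user preference summary:\n{preference}\n\n"
            f"Recently interacted items:\n{item_summaries}\n\n"
            "Update the preference summary to reflect these interactions."
        )
        preference = query_llm(prompt)
    return preference

if __name__ == "__main__":
    history = [Item("Trail running shoe", "shoe.jpg"), Item("Yoga mat", "mat.jpg")]
    print(summarize_preferences(history))
```

The recursive update is what lets the method track preference drift: each step conditions on a compact summary of everything seen so far rather than on the full raw history, which is how the paper sidesteps the long-horizon dependency problem.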

📝 Abstract
Recent advances in Large Language Models (LLMs) have demonstrated significant potential in the field of Recommendation Systems (RSs). Most existing studies have focused on converting user behavior logs into textual prompts and leveraging techniques such as prompt tuning to enable LLMs for recommendation tasks. Meanwhile, research interest has recently grown in multimodal recommendation systems that integrate data from images, text, and other sources using modality fusion techniques. This introduces new challenges to the existing LLM-based recommendation paradigm, which relies solely on text-modality information. Moreover, although Multimodal Large Language Models (MLLMs) capable of processing multimodal inputs have emerged, how to equip MLLMs with multimodal recommendation capabilities remains largely unexplored. To this end, in this paper, we propose the Multimodal Large Language Model-enhanced Multimodal Sequential Recommendation (MLLM-MSR) model. To capture dynamic user preferences, we design a two-stage user preference summarization method. Specifically, we first utilize an MLLM-based item summarizer to extract the image features of a given item and convert the image into text. Then, we employ a recurrent user preference summarization paradigm to capture the dynamic changes in user preferences based on an LLM-based user summarizer. Finally, to enable the MLLM for multimodal recommendation tasks, we propose to fine-tune an MLLM-based recommender using Supervised Fine-Tuning (SFT) techniques. Extensive evaluations across various datasets validate the effectiveness of MLLM-MSR, showcasing its superior ability to capture and adapt to the evolving dynamics of user preferences.
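The SFT step described in the abstract can be pictured as assembling instruction-tuning pairs from the preference summaries. The sketch below assumes a generic prompt/completion format with a binary yes/no interaction target; the paper's exact prompt template and label design are not given here, so every name and field in this example is hypothetical.

```python
import json

def build_sft_example(preference_summary: str,
                      candidate_title: str,
                      candidate_image_summary: str,
                      interacted: bool) -> dict:
    """Pack one (prompt, completion) pair in a generic instruction-tuning format."""
    prompt = (
        "You are a recommender. Based on the user's preference summary, "
        "decide whether the user will interact with the candidate item.\n"
        f"User preference summary: {preference_summary}\n"
        f"Candidate item: {candidate_title}\n"
        f"Candidate image description: {candidate_image_summary}\n"
        "Answer 'yes' or 'no'."
    )
    return {"prompt": prompt, "completion": "yes" if interacted else "no"}

if __name__ == "__main__":
    example = build_sft_example(
        preference_summary="Prefers minimalist running shoes in neutral colors.",
        candidate_title="Lightweight trail running shoe, grey",
        candidate_image_summary="A grey mesh running shoe with a low-profile sole.",
        interacted=True,
    )
    print(json.dumps(example, indent=2))
```

Framing recommendation as supervised next-response prediction over such pairs is what lets a general-purpose MLLM be adapted into a recommender without architectural changes.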
Problem

Research questions and friction points this paper is trying to address.

Multimodal Language Models
Recommendation Systems
User Preference Dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Recommendation System
Sequential Feature Tracking
SFT Fine-tuning
👥 Authors
Yuyang Ye
Rutgers University
Zhi Zheng
University of Science and Technology of China
Yishan Shen
University of Pennsylvania
Federated learning · Causal inference · Bias correction
Tianshu Wang
Bytedance Inc.
He Zhang
Bytedance Inc.
Peijun Zhu
Georgia Institute of Technology
Runlong Yu
University of Alabama / University of Pittsburgh
AI for Science · Data Mining · Machine Learning · Water · GeoAI
Kai Zhang
University of Science and Technology of China
Hui Xiong
Senior Scientist, Candela Corporation
Ultrafast dynamics · Atomic molecular physics · Free electron laser