A Remarkably Efficient Paradigm to Multimodal Large Language Models for Sequential Recommendation

📅 2025-11-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses three key bottlenecks in applying multimodal large language models (MLLMs) to sequential recommendation: (1) suboptimal item representations caused by lengthy, redundant textual descriptions; (2) modality-related cognitive bias from text-centric pretraining, which hinders integration of non-textual modalities; and (3) weakened modeling of early interactions in long user sequences under standard attention mechanisms. To tackle these challenges, the authors propose Speeder, an efficient MLLM-based paradigm built on three components: Multimodal Representation Compression (MRC), Modality-aware Progressive Optimization (MPO), and Sequential Position Awareness Enhancement (SPAE). On real-world Amazon data, Speeder raises training speed to 250% of the baseline (2.5×) and cuts inference time to 25%, while remaining competitive with state-of-the-art methods in recommendation accuracy — bridging the efficiency–effectiveness trade-off in multimodal sequential recommendation.

📝 Abstract
Sequential recommendations (SR) predict users' future interactions based on their historical behavior. The rise of Large Language Models (LLMs) has brought powerful generative and reasoning capabilities, significantly enhancing SR performance, while Multimodal LLMs (MLLMs) further extend this by introducing data like images and interactive relationships. However, critical issues remain, i.e., (a) Suboptimal item representations caused by lengthy and redundant descriptions, leading to inefficiencies in both training and inference; (b) Modality-related cognitive bias, as LLMs are predominantly pretrained on textual data, limiting their ability to effectively integrate and utilize non-textual modalities; (c) Weakening sequential perception in long interaction sequences, where attention mechanisms struggle to capture earlier interactions, hindering the modeling of long-range dependencies. To address these issues, we propose Speeder, an efficient MLLM-based paradigm for SR featuring three key innovations: 1) Multimodal Representation Compression (MRC), which condenses item attributes into concise yet informative tokens, reducing redundancy and computational cost; 2) Modality-aware Progressive Optimization (MPO), enabling gradual learning of multimodal representations; 3) Sequential Position Awareness Enhancement (SPAE), improving the LLM's capability to capture both relative and absolute sequential dependencies in long interaction sequences. Extensive experiments on real-world datasets demonstrate the effectiveness and efficiency of Speeder. Speeder increases training speed to 250% of the original while reducing inference time to 25% on the Amazon dataset.
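The "weakening sequential perception" issue above can be made concrete with a small sketch: one common way to strengthen position awareness is to add a relative-distance bias to causal attention scores, so that earlier items in a long interaction sequence are not drowned out by recency alone. The following is a minimal illustration of that general idea in NumPy; it is a hypothetical sketch, not the paper's actual SPAE module, and the `slope` parameter is an assumption for the example.

```python
import numpy as np

def position_aware_attention(q, k, v, slope=0.1):
    """Causal scaled dot-product attention with a linear relative-position
    bias (hypothetical sketch of position-awareness, not the paper's SPAE)."""
    d = q.shape[-1]
    T = q.shape[0]
    scores = q @ k.T / np.sqrt(d)                      # (T, T) similarities
    # Relative offset: rel[i, j] = j - i (positive = future position).
    rel = np.arange(T)[None, :] - np.arange(T)[:, None]
    # Penalize distance linearly so attention encodes relative order.
    scores = scores - slope * np.abs(rel)
    # Causal mask: an item may attend only to itself and earlier items.
    scores = np.where(rel > 0, -np.inf, scores)
    # Row-wise softmax over the unmasked positions.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v                                       # (T, d) outputs
```

Because the bias grows only linearly with distance, an early interaction still contributes to the last position's representation instead of vanishing under purely content-driven attention.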
Problem

Research questions and friction points this paper is trying to address.

Optimizing lengthy item descriptions to reduce computational inefficiency
Addressing modality bias in LLMs for better multimodal integration
Enhancing sequential perception in long user interaction sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compresses multimodal item attributes into concise tokens
Enables gradual, modality-aware learning of multimodal representations
Enhances sequential position awareness for long interaction sequences
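The compression idea in the first bullet — condensing many item attributes into a few informative tokens — is often realized with cross-attention pooling against a small set of learned queries (as in Q-Former-style encoders). Below is a minimal NumPy sketch of that general technique, under the assumption that attribute features are already embedded; it is an illustration, not the paper's actual MRC module, and `queries` stands in for parameters that would be learned in practice.

```python
import numpy as np

def compress_item_tokens(attr_emb, queries, temperature=1.0):
    """Pool N attribute embeddings (text, image, etc.) into K compact
    tokens via cross-attention with learned queries (hypothetical sketch
    of representation compression, not the paper's exact MRC design).

    attr_emb: (N, d) embeddings of an item's attribute pieces.
    queries:  (K, d) learned query vectors, K << N.
    Returns:  (K, d) compressed item tokens.
    """
    d = attr_emb.shape[-1]
    scores = queries @ attr_emb.T / (np.sqrt(d) * temperature)  # (K, N)
    # Softmax over attributes: each query forms a convex combination.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ attr_emb
```

Feeding K compressed tokens per item to the LLM instead of a full textual description shortens the prompt roughly by a factor of N/K, which is where the training and inference savings would come from.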
🔎 Similar Papers
2024-08-08 · International Workshop on Semantic and Social Media Adaptation and Personalization · Citations: 13