MLLMRec-R1: Incentivizing Reasoning Capability in Large Language Models for Multimodal Sequential Recommendation

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work targets multimodal sequential recommendation, where existing GRPO-based reasoning methods suffer from the high computational cost of visual tokens and from reward inflation caused by Chain-of-Thought (CoT) supervision. To overcome these limitations, the authors propose MLLMRec-R1, a framework that first converts visual signals into textual representations offline, eliminating expensive visual-token computation. They further design confidence-aware, high-quality multimodal CoT supervision signals and introduce a mixed-grained data augmentation strategy to stabilize training. MLLMRec-R1 is the first framework to enable efficient and scalable GRPO-based multimodal reasoning, achieving significant gains over state-of-the-art methods across three benchmark datasets and demonstrating its effectiveness and practicality.

📝 Abstract
Group relative policy optimization (GRPO) has become a standard post-training paradigm for improving reasoning and preference alignment in large language models (LLMs), and has recently shown strong effectiveness in LLM-based recommender systems. However, extending GRPO-based reasoning pipelines to multimodal sequential recommendation (MSR) with multimodal large language models (MLLMs) faces fundamental obstacles. First, MSR requires jointly encoding visual content for both historical interactions and multiple candidate items, causing visual tokens to dominate the input and making the cost of group-based rollout scale with history length and candidate set size, which renders GRPO-based training prohibitively expensive. Second, existing Chain-of-Thought (CoT) supervision suffers from reward inflation in recommendation scenarios, where higher training rewards do not reliably translate into improved ranking performance and may induce shortcut learning. To address these challenges, we propose MLLMRec-R1, an efficient and stable GRPO-based reasoning framework for multimodal sequential recommendation. MLLMRec-R1 textualizes visual signals offline to eliminate expensive visual tokens while preserving multimodal semantics, and constructs high-quality multimodal CoT supervision through refinement and confidence-aware assessment. Furthermore, a mixed-grained data augmentation strategy selectively injects reliable CoT samples while retaining standard training data, mitigating reward inflation and improving generalization stability. Extensive experiments on three benchmark datasets demonstrate that MLLMRec-R1 consistently outperforms state-of-the-art methods, establishing a practical and effective GRPO-based reasoning pipeline for multimodal sequential recommendation. The code is available at https://github.com/wangyu0627/MLLMRec-R1.
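The group-relative advantage at the heart of GRPO (normalizing each rollout's reward against its own group, with no value network) can be sketched as follows. This is an illustrative snippet, not the paper's code; the function name and the example rewards are hypothetical:

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages for one prompt's rollout group.

    Each rollout's reward is normalized by the mean and standard
    deviation of its own group, so no learned value baseline is needed.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Hypothetical example: four rollouts for one recommendation prompt,
# each scored by the ranking quality of its predicted next item.
advs = grpo_advantages([1.0, 0.0, 0.5, 0.5])
```

Because the cost of such group-based rollouts scales with input length, the abstract's point follows: when visual tokens for the history and every candidate item dominate the prompt, generating a whole group per prompt becomes prohibitively expensive, which is what MLLMRec-R1's offline textualization is designed to avoid.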
Problem

Research questions and friction points this paper is trying to address.

Multimodal Sequential Recommendation
Group Relative Policy Optimization
Reward Inflation
Visual Token Efficiency
Chain-of-Thought Supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Sequential Recommendation
Group Relative Policy Optimization
Chain-of-Thought Reasoning
Visual Token Textualization
Reward Inflation Mitigation