Aligning Multimodal Sequential Recommendations via Robust Direct Preference Optimization with Sparse MoE

📅 2026-03-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of noisy negative samples in implicit feedback scenarios, where unobserved items are commonly treated as negatives, thereby degrading the performance of Direct Preference Optimization (DPO). To mitigate this issue, the authors propose a stochastic hard negative sampling strategy within a dynamic Top-K candidate pool, replacing deterministic hard negatives. This approach preserves informative hard negatives while alleviating error-prone gradient suppression caused by false negatives and yields a smoother optimization landscape. The method integrates a Sparse Mixture-of-Experts (Sparse MoE) encoder with multimodal sequential modeling to enable efficient capacity scaling and robust preference learning. Experimental results demonstrate consistent improvements across three Amazon benchmark datasets, achieving up to a 5.25% relative gain in NDCG@5 with negligible increase in inference overhead.
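The paper does not include code, but the core idea of the summary above, sampling a negative stochastically from a dynamic top-K candidate pool instead of always taking the single hardest negative, can be sketched as follows. All function and variable names here are hypothetical; the pool size `k` and number of negatives are illustrative placeholders, not the paper's settings.

```python
import torch


def sample_stochastic_hard_negatives(scores, positive_ids, k=50, num_neg=1):
    """Sketch of stochastic hard negative sampling from a dynamic top-K pool.

    scores:       (batch, num_items) current model scores for all candidates;
                  the pool is "dynamic" because it is recomputed from these
                  scores at each step.
    positive_ids: (batch, num_pos) observed (positive) item ids to exclude.
    Returns (batch, num_neg) sampled negative item ids.
    """
    masked = scores.clone()
    # Never treat an observed positive as a negative candidate.
    masked.scatter_(1, positive_ids, float("-inf"))
    # Dynamic top-K pool of hard (high-scoring, unobserved) candidates.
    topk_ids = masked.topk(k, dim=1).indices
    # Uniform stochastic choice inside the pool: keeps negatives informative
    # while avoiding deterministic suppression of a possible false negative.
    choice = torch.randint(0, k, (scores.size(0), num_neg),
                           device=scores.device)
    return topk_ids.gather(1, choice)
```

The randomness is the point: a single deterministic hardest negative would receive a large suppressive gradient every step even if it is a false negative, whereas spreading updates over the pool smooths the optimization, as the summary describes.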
📝 Abstract
Preference-based alignment objectives have been widely adopted, from RLHF-style pairwise learning in large language models to emerging applications in recommender systems. Yet, existing work rarely examines how Direct Preference Optimization (DPO) behaves under implicit feedback, where unobserved items are not reliable negatives. We conduct systematic experiments on multimodal sequential recommendation to compare common negative-selection strategies and their interaction with DPO training. Our central finding is that a simple modification, replacing deterministic hard negatives with stochastic sampling from a dynamic top-K candidate pool, consistently improves ranking performance. We attribute its effectiveness to two factors: (1) reducing erroneous suppressive gradients caused by false negatives, and (2) retaining informative hard signals while smoothing optimization via controlled stochasticity. With an optional sparse Mixture-of-Experts encoder for efficient capacity scaling, RoDPO achieves up to a 5.25% relative gain in NDCG@5 on three Amazon benchmarks, with nearly unchanged inference cost.
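For context on the objective being robustified, here is the standard DPO pairwise loss the abstract refers to, written for a recommendation setting where a positive item is preferred over a sampled negative. This is a generic sketch of the published DPO formulation, not the paper's exact implementation; the argument names are hypothetical and all inputs are log-scores.

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_pos, policy_neg, ref_pos, ref_neg, beta=0.1):
    """Standard DPO pairwise objective.

    Pushes the trainable policy to score the positive item above the
    (stochastically sampled) negative, measured relative to a frozen
    reference model so the policy does not drift arbitrarily far.
    All arguments are tensors of log-scores; beta scales the implicit
    reward margin.
    """
    margin = (policy_pos - policy_neg) - (ref_pos - ref_neg)
    return -F.logsigmoid(beta * margin).mean()
```

Under implicit feedback, `policy_neg` may come from a false negative (an unobserved but relevant item); the paper's point is that where this negative comes from, deterministic hardest vs. stochastic top-K, changes how damaging such cases are.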
Problem

Research questions and friction points this paper is trying to address.

Multimodal Sequential Recommendation
Implicit Feedback
Direct Preference Optimization
Negative Sampling
Preference Alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct Preference Optimization
Implicit Feedback
Stochastic Negative Sampling
Sparse Mixture-of-Experts
Multimodal Sequential Recommendation
Hejin Huang · Sun Yat-sen University
Jusheng Zhang · Sun Yat-sen University
Kaitong Cai · Sun Yat-sen University
Jian Wang · Snap Inc. · computer vision, signal processing
Rong Pan · Sun Yat-sen University