AsarRec: Adaptive Sequential Augmentation for Robust Self-supervised Sequential Recommendation

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world user behavior sequences often contain noise, degrading the performance of sequential recommendation models. Existing self-supervised approaches rely on static data augmentation strategies, which lack adaptability across diverse scenarios and may impair model performance when applied inappropriately. To address this, we propose an adaptive sequence augmentation framework that introduces, for the first time, a dynamic augmentation mechanism grounded in probabilistic transition matrix modeling and differentiable Semi-Sinkhorn projection. Our method jointly optimizes three objectives: augmentation diversity, semantic invariance, and information preservation. Crucially, it abandons predefined augmentation paradigms and enables end-to-end learnable, robust data augmentation. Extensive experiments on three benchmark datasets under various noise settings demonstrate that our approach achieves state-of-the-art accuracy and robustness, validating both the effectiveness and generalizability of adaptive augmentation in sequential recommendation.

📝 Abstract
Sequential recommender systems have demonstrated strong capabilities in modeling users' dynamic preferences and capturing item transition patterns. However, real-world user behaviors are often noisy due to factors such as human errors, uncertainty, and behavioral ambiguity, which can degrade recommendation performance. To address this issue, recent approaches widely adopt self-supervised learning (SSL), particularly contrastive learning, by generating perturbed views of user interaction sequences and maximizing their mutual information to improve model robustness. However, these methods rely heavily on pre-defined static augmentation strategies (where the augmentation type remains fixed once chosen) to construct augmented views, leading to two critical challenges: (1) the optimal augmentation type can vary significantly across different scenarios; (2) inappropriate augmentations may even degrade recommendation performance, limiting the effectiveness of SSL. To overcome these limitations, we propose an adaptive augmentation framework. We first unify existing basic augmentation operations into a single formulation via structured transformation matrices. Building on this, we introduce AsarRec (Adaptive Sequential Augmentation for Robust Sequential Recommendation), which learns to generate transformation matrices by encoding user sequences into probabilistic transition matrices and projecting them into hard semi-doubly stochastic matrices via a differentiable Semi-Sinkhorn algorithm. To ensure that the learned augmentations benefit downstream performance, we jointly optimize three objectives: diversity, semantic invariance, and informativeness. Extensive experiments on three benchmark datasets under varying noise levels validate the effectiveness of AsarRec, demonstrating its superior robustness and consistent improvements.
Problem

Research questions and friction points this paper is trying to address.

Addresses noise in user behavior data affecting recommendation accuracy.
Overcomes limitations of static augmentation in self-supervised learning.
Proposes adaptive augmentation to enhance robustness in sequential recommendation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive augmentation via probabilistic transition matrices
Differentiable Semi-Sinkhorn algorithm for hard matrices
Joint optimization of diversity, invariance, and informativeness
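The core projection step the paper describes can be illustrated with the classic Sinkhorn iteration, which the differentiable Semi-Sinkhorn algorithm builds on: alternately normalizing the rows and columns of a positive matrix drives it toward a doubly stochastic matrix. The sketch below is a minimal illustration of plain Sinkhorn normalization, not the paper's actual Semi-Sinkhorn variant; the function name and iteration count are illustrative choices.

```python
import numpy as np

def sinkhorn(logits, n_iters=50):
    """Approximately project a real matrix onto the set of doubly
    stochastic matrices (rows and columns each summing to 1) by
    alternating row and column normalization."""
    P = np.exp(logits)  # elementwise exp ensures strictly positive entries
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # normalize each row
        P = P / P.sum(axis=0, keepdims=True)  # normalize each column
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)))
# After enough iterations, row and column sums are both close to 1.
assert np.allclose(P.sum(axis=0), 1.0, atol=1e-3)
assert np.allclose(P.sum(axis=1), 1.0, atol=1e-3)
```

In the paper's setting, the entries of `logits` would come from a learned probabilistic transition matrix over sequence positions, and the soft output would be further sharpened into a hard transformation matrix while keeping the whole pipeline differentiable.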
Kaike Zhang
Institute of Computing Technology, Chinese Academy of Sciences
Trustworthy Graph Data Mining & Representation Learning · Robust Recommender System
Qi Cao
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Fei Sun
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Xinran Liu
Ph.D. candidate, Vanderbilt University
optimal transport · machine learning