🤖 AI Summary
Multi-behavior sequential recommendation faces dual challenges of behavioral heterogeneity and data sparsity, limiting effective dynamic interest modeling. To address this, we propose BLADE, a behavior-level data augmentation framework. BLADE introduces three novel behavior-level augmentation strategies—sequence reconstruction, behavior-aware masking, and cross-behavior contrastive learning—and designs a dual-path item-behavior fusion mechanism that jointly operates at both the input and intermediate layers to enable multi-perspective preference learning. Furthermore, it incorporates a multi-behavior graph neural network to strengthen high-order relational modeling. Extensive experiments on three real-world datasets demonstrate that BLADE significantly outperforms state-of-the-art methods, achieving an average 12.7% improvement in Recall@10. Moreover, BLADE exhibits superior robustness and generalization capability under sparse and noisy conditions.
📝 Abstract
Multi-behavior sequential recommendation aims to capture users' dynamic interests by modeling diverse types of user interactions over time. Although several studies have explored this setting, the recommendation performance remains suboptimal, mainly due to two fundamental challenges: the heterogeneity of user behaviors and data sparsity. To address these challenges, we propose BLADE, a framework that enhances multi-behavior modeling while mitigating data sparsity. Specifically, to handle behavior heterogeneity, we introduce a dual item-behavior fusion architecture that incorporates behavior information at both the input and intermediate levels, enabling preference modeling from multiple perspectives. To mitigate data sparsity, we design three behavior-level data augmentation methods that operate directly on behavior sequences rather than core item sequences. These methods generate diverse augmented views while preserving the semantic consistency of item sequences. These augmented views further enhance representation learning and generalization via contrastive learning. Experiments on three real-world datasets demonstrate the effectiveness of our approach.