User Invariant Preference Learning for Multi-Behavior Recommendation

📅 2025-07-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multi-behavior recommendation methods commonly assume that all user behaviors share a unified preference representation, overlooking the intrinsic coexistence of shared (invariant) and behavior-specific (variant) preferences—where the latter often introduce noise that degrades target-behavior prediction. Method: We propose the User-Invariant Preference Learning (UIPL) framework, the first to incorporate Invariant Risk Minimization (IRM) into multi-behavior recommendation. UIPL constructs heterogeneous behavioral environments based on behavior heterogeneity and integrates a variational autoencoder to explicitly disentangle stable (shared) preferences from dynamic (behavior-specific) ones across behaviors, thereby suppressing noise from non-target behaviors. Contribution/Results: Extensive experiments on four real-world datasets demonstrate that UIPL significantly improves both accuracy and robustness in target-behavior prediction, validating the effectiveness of invariant preference modeling for multi-behavior recommendation.

📝 Abstract
In multi-behavior recommendation scenarios, analyzing users' diverse behaviors, such as click, purchase, and rating, enables a more comprehensive understanding of their interests, facilitating personalized and accurate recommendations. A fundamental assumption of multi-behavior recommendation methods is the existence of shared user preferences across behaviors, representing users' intrinsic interests. Based on this assumption, existing approaches aim to integrate information from various behaviors to enrich user representations. However, they often overlook the presence of both commonalities and individualities in users' multi-behavior preferences. These individualities reflect distinct aspects of preferences captured by different behaviors, where certain auxiliary behaviors may introduce noise, hindering the prediction of the target behavior. To address this issue, we propose a User Invariant Preference Learning framework for multi-behavior recommendation (UIPL for short), aiming to capture users' intrinsic interests (referred to as invariant preferences) from multi-behavior interactions and mitigate the introduction of noise. Specifically, UIPL leverages the paradigm of invariant risk minimization to learn invariant preferences. To implement this, we employ a variational autoencoder (VAE) to extract users' invariant preferences, replacing the standard reconstruction loss with an invariant risk minimization constraint. Additionally, we construct distinct environments by combining multi-behavior data to enhance robustness in learning these preferences. Finally, the learned invariant preferences are used to provide recommendations for the target behavior. Extensive experiments on four real-world datasets demonstrate that UIPL significantly outperforms current state-of-the-art methods.
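To make the invariant-risk-minimization idea in the abstract concrete, below is a minimal numpy sketch of the IRMv1-style objective: each behavior environment contributes its empirical risk plus a penalty on the gradient of that risk with respect to a fixed dummy classifier w = 1.0. This is an illustrative toy with a linear predictor and squared loss, not the paper's actual UIPL loss (which applies the constraint inside a VAE); all function names here are hypothetical.

```python
import numpy as np

def irm_penalty(phi, y):
    """IRMv1-style penalty: squared gradient of the per-environment
    risk w.r.t. a dummy scalar classifier w, evaluated at w = 1.0.
    Here R(w) = mean((w * phi - y)^2), so dR/dw|_{w=1} = mean(2*(phi - y)*phi)."""
    grad = np.mean(2.0 * (phi - y) * phi)
    return grad ** 2

def irm_objective(envs, lam=1.0):
    """Sum of per-environment risks plus the IRM gradient penalty.
    `envs` is a list of (phi, y) pairs, one per behavior environment
    (a stand-in for the paper's heterogeneous environments)."""
    risk = sum(np.mean((phi - y) ** 2) for phi, y in envs)
    penalty = sum(irm_penalty(phi, y) for phi, y in envs)
    return risk + lam * penalty
```

A representation `phi` that fits every environment with the same classifier drives both terms to zero, which is the sense in which minimizing this objective favors preferences that are stable (invariant) across behaviors.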
Problem

Research questions and friction points this paper is trying to address.

Identify shared user preferences across diverse behaviors
Mitigate noise from auxiliary behaviors in recommendations
Learn invariant user preferences using multi-behavior data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses invariant risk minimization for preference learning
Employs variational autoencoder to extract invariant preferences
Constructs distinct environments from multi-behavior data
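One way to picture the last bullet: environments can be formed by combining each auxiliary behavior's interactions with the target behavior's. The sketch below is a guessed toy construction (pairing each auxiliary interaction matrix with the target one by addition); the paper's exact combination scheme may differ, and all names and data here are hypothetical.

```python
import numpy as np

# Hypothetical toy interaction matrices (users x items), one per behavior.
behaviors = {
    "click":    np.array([[1, 0, 1], [0, 1, 1]]),
    "cart":     np.array([[1, 0, 0], [0, 1, 0]]),
    "purchase": np.array([[1, 0, 0], [0, 0, 0]]),
}

def build_environments(behaviors, target="purchase"):
    """Combine each auxiliary behavior with the target behavior to form
    distinct training environments, so an invariance constraint can be
    enforced across them."""
    envs = []
    for name, mat in behaviors.items():
        if name == target:
            continue  # the target behavior itself is not an auxiliary environment
        envs.append((name, mat + behaviors[target]))
    return envs
```

Each resulting environment exposes the model to a different mixture of behavioral signal, which is what lets an invariance penalty separate shared preferences from behavior-specific noise.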
Mingshi Yan
Tianjin University, China
Zhiyong Cheng
University of Florida
autophagy, mitochondria, epigenetics, obesity, diabetes
Fan Liu
National University of Singapore, Singapore
Yingda Lyu
Jilin University, China
Yahong Han
Professor of Computer Science, Tianjin University
Multimedia