🤖 AI Summary
This work addresses the challenge in multi-behavior recommendation where auxiliary behaviors—such as clicks or add-to-cart actions—often introduce bias into the learning of target behavior preferences (e.g., purchases) due to noise, weak relevance, or semantic misalignment. To mitigate this, the authors propose RMBRec, a novel framework that jointly enforces local semantic consistency and global optimization stability. Specifically, it enhances cross-behavior semantic alignment through mutual information maximization and improves robustness across behavioral contexts by minimizing prediction risk variance. The approach integrates these principles into two theoretically grounded components: a Representation Robustness Module (RRM) and an Optimization Robustness Module (ORM). Extensive experiments on three real-world datasets demonstrate that RMBRec significantly outperforms state-of-the-art methods, achieving superior recommendation accuracy and strong resilience to noisy perturbations.
📝 Abstract
Multi-behavior recommendation faces a critical challenge in practice: auxiliary behaviors (e.g., clicks, carts) are often noisy, weakly correlated, or semantically misaligned with the target behavior (e.g., purchase), which leads to biased preference learning and suboptimal performance. While existing methods attempt to fuse these heterogeneous signals, they lack a principled mechanism to ensure robustness against such behavioral inconsistency. In this work, we propose RMBRec (Robust Multi-Behavior Recommendation towards Target Behaviors), a framework grounded in an information-theoretic robustness principle. We interpret robustness as a joint process of maximizing predictive information while minimizing its variance across heterogeneous behavioral environments. Under this perspective, the Representation Robustness Module (RRM) enhances local semantic consistency by maximizing the mutual information between users' auxiliary and target representations, whereas the Optimization Robustness Module (ORM) enforces global stability by minimizing the variance of predictive risks across behaviors, an efficient approximation to invariant risk minimization. This local-global collaboration bridges representation purification and optimization invariance in a theoretically coherent way. Extensive experiments on three real-world datasets demonstrate that RMBRec not only outperforms state-of-the-art methods in accuracy but also maintains remarkable stability under various noise perturbations. For reproducibility, our code is available at https://github.com/miaomiao-cai2/RMBRec/.
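To make the ORM idea concrete, the sketch below shows a risk-variance penalty of the kind the abstract describes: per-behavior losses are combined as their mean plus the variance across behavioral environments, which penalizes behaviors whose risk deviates from the rest (in the spirit of invariant risk minimization). This is a minimal illustration under our own assumptions, not the authors' implementation; the function names and the weight `beta` are hypothetical.

```python
import numpy as np

def risk_variance_penalty(risks):
    """Population variance of per-behavior predictive risks.

    risks: sequence of scalar losses, one per behavior
    (e.g., click, cart, purchase). A value of 0 means all
    behavioral environments incur identical risk.
    """
    r = np.asarray(risks, dtype=float)
    return float(r.var())

def orm_objective(risks, beta=1.0):
    """Hypothetical ORM-style objective: average risk plus a
    variance penalty that enforces stability across behaviors."""
    r = np.asarray(risks, dtype=float)
    return float(r.mean() + beta * risk_variance_penalty(r))

# Example: purchase risk (0.4) deviates from click/cart risks (0.2),
# so the variance term adds a penalty on top of the mean risk.
losses = [0.2, 0.2, 0.4]
print(orm_objective(losses, beta=1.0))
```

When all behaviors incur the same risk the penalty vanishes and the objective reduces to ordinary empirical risk; larger `beta` trades average accuracy for cross-behavior stability.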