🤖 AI Summary
This work addresses the limitations of existing implicit collaborative filtering methods, which often neglect the construction of positive sample pairs during negative sampling and suffer from user activity bias, and therefore struggle to capture the preferences of inactive users. To overcome these issues, we propose PSP-NS, a lightweight, plug-and-play module that enhances positive supervision by constructing a user-item bipartite graph with confidence-aware edge weights and generating high-quality positive pairs via a copy-and-reweight strategy. An activity-aware weighting mechanism further mitigates the impact of activity bias. PSP-NS is model-agnostic and integrates seamlessly with mainstream implicit CF models. Extensive experiments on four real-world datasets demonstrate its effectiveness, with notable improvements on Yelp—achieving 32.11% and 22.90% relative gains in Recall@30 and Precision@30, respectively.
📝 Abstract
Most implicit collaborative filtering (CF) models are trained with negative sampling, where existing work designs sophisticated strategies for high-quality negatives while largely overlooking the exploration of positive samples. Although some denoising recommendation methods can be applied to implicit CF to denoise positive samples, they often sparsify positive supervision. Moreover, these approaches generally overlook user activity bias during training, leading to insufficient learning for inactive users. To address these issues, we propose a simple yet effective negative sampling plugin, PSP-NS, from the perspective of enhancing positive supervision signals. It builds a user-item bipartite graph whose edge weights encode interaction confidence inferred from global and local patterns, generates positive sample pairs via replication-based reweighting to strengthen positive signals, and adopts an activity-aware weighting scheme to effectively learn inactive users' preferences. We provide theoretical insights from a margin-improvement perspective, explaining why PSP-NS tends to improve ranking quality (e.g., Precision@k/Recall@k), and conduct extensive experiments on four real-world datasets to demonstrate its superiority. For instance, PSP-NS boosts Recall@30 and Precision@30 by 32.11% and 22.90% on Yelp over the strongest baselines. PSP-NS can be integrated with various implicit CF recommenders or negative sampling methods to enhance their performance.
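To make the described pipeline concrete, below is a minimal, self-contained sketch of the two mechanisms the abstract names: replication-based reweighting of positive pairs and activity-aware user weighting. All specifics here are assumptions for illustration — the paper does not expose its confidence formula or weighting function, so the confidence score (item popularity as a "global pattern") and the inverse-degree user weight are placeholder heuristics, and the function name `psp_ns_positive_pairs` is hypothetical.

```python
from collections import Counter

def psp_ns_positive_pairs(interactions, max_copies=3):
    """Illustrative sketch of PSP-NS-style positive supervision (not the paper's code).

    interactions: list of (user, item) implicit-feedback pairs.
    Returns (weighted_pairs, user_weights):
      - weighted_pairs: each positive pair replicated in proportion to an
        assumed confidence score (copy-and-reweight),
      - user_weights: per-user weights that down-weight active users so
        inactive users' positives are not drowned out (assumed inverse-degree form).
    """
    user_deg = Counter(u for u, _ in interactions)   # user activity (local pattern)
    item_deg = Counter(i for _, i in interactions)   # item popularity (global pattern)
    max_item = max(item_deg.values())

    weighted_pairs = []
    for u, i in interactions:
        # Assumed confidence heuristic: interactions with globally popular
        # items are treated as higher-confidence positives.
        conf = item_deg[i] / max_item
        copies = 1 + round(conf * (max_copies - 1))  # replicate-and-reweight
        weighted_pairs.extend([(u, i)] * copies)

    # Activity-aware weighting: inverse user degree, so each interaction of an
    # inactive user contributes more to the loss (placeholder formulation).
    user_weights = {u: 1.0 / d for u, d in user_deg.items()}
    return weighted_pairs, user_weights
```

In a real trainer, `weighted_pairs` would feed the positive side of a BPR-style sampler and `user_weights` would scale the per-pair loss; both hooks are what make the module plug-and-play with existing implicit CF models.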