🤖 AI Summary
This paper investigates whether weighted matrix factorization (WMF) actually improves recommendation performance in implicit-feedback settings. Through a systematic analysis of the interplay among weighting schemes, model capacity, and regularization, the authors find that unweighted training matches, and sometimes surpasses, state-of-the-art weighted methods when model capacity is high, challenging the conventional assumption that weighting universally helps. To address the computational difficulty of exactly optimizing classical weighted objectives (e.g., WALS), they derive efficient algorithms that minimize several such objectives exactly, objectives previously considered computationally intractable. Experiments across diverse MF architectures, weighting strategies, and benchmark datasets support the same picture: weighting yields gains mainly for low-capacity models or under strong regularization, whereas unweighted training is more robust and efficient for large models. The core contributions are (i) identifying the boundary conditions under which weighting is beneficial, (ii) a systematic empirical analysis of the weighting–capacity–regularization interplay, and (iii) practical exact optimization algorithms.
📝 Abstract
Matrix factorization is a widely used approach for top-N recommendation and collaborative filtering. When applied to implicit-feedback data (such as clicks), a common heuristic is to upweight the observed interactions, a strategy that has been shown to improve performance for certain algorithms. In this paper, we conduct a systematic study of various weighting schemes and matrix factorization algorithms. Somewhat surprisingly, we find that training with unweighted data can perform comparably to, and sometimes outperform, training with weighted data, especially for large models, which challenges the conventional wisdom. Nevertheless, we identify cases where weighting can be beneficial, particularly for models with lower capacity and specific regularization schemes. We also derive efficient algorithms for exactly minimizing several weighted objectives that were previously considered computationally intractable. Our work provides a comprehensive analysis of the interplay between weighting, regularization, and model capacity in matrix factorization for recommender systems.
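To make the weighting heuristic concrete, the sketch below implements a common weighted-MF baseline of the kind the abstract refers to: confidence weights `c_ui = 1 + alpha * r_ui` on observed interactions, minimized by WALS-style alternating least squares. This is a generic illustration of the weighted objective, not the paper's proposed algorithm; the function name, hyperparameter values, and dense-matrix formulation are illustrative assumptions.

```python
import numpy as np

def weighted_als(R, rank=8, alpha=40.0, reg=0.1, n_iters=10, seed=0):
    """WALS-style weighted matrix factorization for implicit feedback.

    Minimizes sum_{u,i} c_ui * (p_ui - <U_u, V_i>)^2 + reg * (|U|^2 + |V|^2),
    where p_ui = 1[r_ui > 0] and c_ui = 1 + alpha * r_ui upweights
    observed interactions. Setting alpha=0 recovers unweighted training.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = (R > 0).astype(float)           # binarized preference targets
    C = 1.0 + alpha * R                 # confidence weights
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    I = reg * np.eye(rank)
    for _ in range(n_iters):
        # Each row update is a weighted ridge regression with a closed form.
        for u in range(n_users):
            Cu = np.diag(C[u])
            U[u] = np.linalg.solve(V.T @ Cu @ V + I, V.T @ Cu @ P[u])
        for i in range(n_items):
            Ci = np.diag(C[:, i])
            V[i] = np.linalg.solve(U.T @ Ci @ U + I, U.T @ Ci @ P[:, i])
    return U, V
```

Because each alternating step solves its weighted least-squares subproblem exactly, the objective decreases monotonically; the non-convexity the paper discusses comes from optimizing `U` and `V` jointly.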