Discrete Scale-invariant Metric Learning for Efficient Collaborative Filtering

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address metric learning bias induced by class imbalance in collaborative filtering, this paper proposes Discrete Scale-Invariant Metric Learning (DSIML), which maps users and items into a shared Hamming space for efficient binary recommendation. Our key contributions are: (1) the first scale-invariant margin formulated from the negative-sample perspective, jointly integrating triplet hinge loss with pairwise ranking loss; (2) a log-sum-exp-based variational quadratic upper bound to tackle the mixed-integer hashing optimization problem; and (3) an alternating optimization algorithm with guaranteed convergence. Extensive experiments on multiple benchmark datasets demonstrate that DSIML significantly outperforms state-of-the-art metric learning and hashing-based recommendation methods—achieving both high accuracy and substantially improved online inference efficiency. The source code is publicly available.
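The summary above does not spell out the angle-based margin, so the following is only a plausible sketch, assuming the margin is the angle subtended at the negative item by the user and the positive item (the function name and hinge direction are hypothetical, not taken from the paper):

```python
import numpy as np

def angular_margin_triplet_loss(user, pos_item, neg_item, margin=0.5):
    """Hypothetical scale-invariant triplet hinge loss.

    The "margin" is an angle (in radians) measured at the negative
    item: the angle between the rays from the negative item toward
    the user and toward the positive item. A far-away negative sees
    the user/positive pair under a small angle, so the hinge fires
    only when the negative sits between them.
    """
    a = user - neg_item
    b = pos_item - neg_item
    cos_angle = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return max(0.0, angle - margin)
```

Because angles are unchanged under uniform scaling of the embedding, such a margin adapts to item classes with different intra-class variations, which is the motivation the summary states for replacing an absolute distance margin.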

📝 Abstract
Metric learning has attracted extensive interest for its ability to provide personalized recommendations based on the importance of observed user-item interactions. Current metric learning methods aim to push negative items away from the corresponding users and positive items by an absolute geometrical distance margin. However, items may come from imbalanced categories with different intra-class variations. Thus, the absolute distance margin may not be ideal for estimating the difference between user preferences over imbalanced items. To this end, we propose a new method, named discrete scale-invariant metric learning (DSIML), by adding binary constraints to users and items, which maps users and items into binary codes of a shared Hamming subspace to speed up the online recommendation. Specifically, we first propose a scale-invariant margin based on angles at the negative item points in the shared Hamming subspace. Then, we derive a scale-invariant triplet hinge loss based on the margin. To capture more preference difference information, we integrate a pairwise ranking loss into the scale-invariant loss in the proposed model. Due to the difficulty of directly optimizing the mixed-integer optimization problem formulated with *log-sum-exp* functions, we seek to optimize its variational quadratic upper bound and learn hash codes with an alternating optimization strategy. Experiments on benchmark datasets clearly show that our proposed method is superior to competitive metric learning and hashing-based baselines for recommender systems. The implementation code is available at https://github.com/AnonyFeb/dsml.
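The abstract does not reproduce the paper's bound, but a standard variational quadratic upper bound for a softplus term log(1 + e^x) (the two-term log-sum-exp) is the Jaakkola–Jordan bound, which is tight at x = ±ξ. The sketch below assumes this family of bounds and only demonstrates the bounding step, not the full alternating optimization:

```python
import numpy as np

def softplus(x):
    # Numerically stable log(1 + e^x).
    return np.log1p(np.exp(-abs(x))) + np.maximum(x, 0.0)

def lam(xi):
    # Variational coefficient tanh(xi/2) / (4*xi), with its
    # well-defined limit 1/8 as xi -> 0.
    xi = abs(xi)
    if xi < 1e-8:
        return 0.125
    return np.tanh(xi / 2.0) / (4.0 * xi)

def quad_upper_bound(x, xi):
    # Jaakkola-Jordan quadratic upper bound on softplus(x),
    # tight at x = +xi and x = -xi. Minimizing this surrogate
    # (and re-tightening xi) is the usual majorize-minimize step.
    return softplus(xi) + 0.5 * (x - xi) + lam(xi) * (x**2 - xi**2)
```

A quadratic surrogate like this is what makes the discrete (hash-code) subproblems tractable in an alternating scheme, since each step minimizes a quadratic rather than a log-sum-exp directly.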
Problem

Research questions and friction points this paper is trying to address.

Addresses imbalanced item categories in metric learning
Proposes scale-invariant margin for user preference estimation
Optimizes binary codes for efficient recommendation systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Binary constraints map users to Hamming subspace
Scale-invariant margin based on angles
Optimize variational quadratic upper bound
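The efficiency claim rests on Hamming-space scoring: once users and items are binary codes, each distance is one XOR plus a popcount. A minimal serving sketch (the helper names are hypothetical, and codes are assumed packed into Python integers):

```python
def hamming_distance(a: int, b: int) -> int:
    # XOR marks disagreeing bits; counting them gives the
    # Hamming distance. bin().count("1") is a portable popcount
    # (int.bit_count() on Python 3.10+ is faster).
    return bin(a ^ b).count("1")

def recommend(user_code: int, item_codes: list, top_k: int = 5) -> list:
    # Rank item indices by Hamming distance to the user's code.
    order = sorted(range(len(item_codes)),
                   key=lambda j: hamming_distance(user_code, item_codes[j]))
    return order[:top_k]
```

This is why hashing-based recommenders are fast online: the ranking needs only bitwise integer operations, with no floating-point similarity computation per item.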
Yan Zhang
Faculty of Science and Technology, Charles Darwin University, Darwin, Australia
Li Deng
Chief AI Officer, Citadel (former)
Speech and Language Processing · Deep Learning · Artificial Intelligence · Signal Processing · Financial Engineering
Lixin Duan
Data Intelligence Group (DIG) @ UESTC
Transfer Learning · Domain Adaptation
Sami Azam
Faculty of Science and Technology, Charles Darwin University, Darwin, Australia