🤖 AI Summary
To address the weak interpretability and insufficient modeling of numerical features in recommender systems, this paper proposes CCSS, a model-agnostic contrastive monotonicity learning framework. CCSS constructs explicit monotonicity constraints between numerical features and model outputs by generating semantically consistent counterfactual samples, and optimizes these constraints end-to-end via contrastive learning. Crucially, CCSS requires no modification to the underlying recommendation model and supports plug-and-play integration. Extensive experiments on a public dataset and an industrial-scale dataset demonstrate that CCSS significantly improves both recommendation accuracy (e.g., +0.8% AUC) and interpretability (e.g., +23% monotonicity compliance rate). The framework has been deployed in a large-scale production recommender system, serving hundreds of millions of users.
📝 Abstract
We propose a general model-agnostic Contrastive learning framework with Counterfactual Samples Synthesizing (CCSS) for modeling the monotonicity between the neural network output and numerical features, a property critical to the interpretability and effectiveness of recommender systems. CCSS models the monotonicity via a two-stage process: synthesizing counterfactual samples and contrasting the counterfactual samples against the originals. The two techniques are naturally integrated into a model-agnostic framework, forming an end-to-end training process. Extensive experiments are conducted on a publicly available dataset and a real industrial dataset, and the results demonstrate the effectiveness of our proposed CCSS. Moreover, CCSS has been deployed in our large-scale industrial recommender system, successfully serving hundreds of millions of users.
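The two-stage process in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual method: a toy linear scorer, a single perturbed feature, a fixed shift `delta`, and a hinge-style ranking loss standing in for the contrastive objective. Stage one synthesizes a counterfactual sample by shifting one numerical feature; stage two contrasts the model's outputs on the original and counterfactual samples so the output is pushed to increase with that feature.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_counterfactual(x, feat_idx, delta=1.0):
    """Stage 1 (sketch): counterfactual sample identical to x except that
    one numerical feature is shifted upward by `delta`."""
    x_cf = x.copy()
    x_cf[:, feat_idx] += delta
    return x_cf

def contrastive_monotonicity_loss(w, x, x_cf, margin=0.1):
    """Stage 2 (sketch): contrast original vs. counterfactual outputs.

    A hinge-style ranking loss penalizing pairs where the output does not
    rise by at least `margin` when the feature increases; zero loss means
    the monotonicity constraint is satisfied with margin on every pair."""
    diff = x_cf @ w - x @ w
    return np.maximum(0.0, margin - diff).mean()

# Toy data: 64 samples, 4 numerical features; enforce monotonicity in feature 0.
x = rng.random((64, 4))
x_cf = synthesize_counterfactual(x, feat_idx=0)
w = rng.normal(size=4) * 0.01  # toy linear scorer f(x) = x @ w

# Plain (sub)gradient descent on the contrastive loss.
for _ in range(200):
    diff = (x_cf - x) @ w             # output change under the counterfactual shift
    active = (0.1 - diff) > 0         # pairs still violating the margin
    grad = -(x_cf - x)[active].sum(axis=0) / len(x)
    w -= 0.1 * grad

# After training, the counterfactual output exceeds the original for every pair.
```

In the actual framework the scorer would be an arbitrary recommendation model and the loss would be added to the task loss, which is what makes the approach model-agnostic and end-to-end trainable.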