🤖 AI Summary
Recommendation systems suffer from exposure bias, causing observed user feedback to poorly reflect true preferences. While existing debiasing methods perform well under counterfactual evaluation, their performance degrades significantly under factual (i.e., real-world interaction) settings. To address this, we propose Bias-adaptive Preference distillation Learning (BPL), a dual-path knowledge distillation framework. First, a bias-aware teacher-student distillation preserves explicit feedback knowledge from factual interactions. Second, a reliability-filtered self-distillation mechanism iteratively refines latent preference estimation. Crucially, BPL is the first method to jointly model and co-optimize recommendation performance in both factual and counterfactual environments. Extensive experiments demonstrate that BPL consistently outperforms state-of-the-art debiasing approaches across both evaluation paradigms, achieving superior short-term behavioral prediction accuracy while also improving long-term user satisfaction. The code is publicly available.
📝 Abstract
Recommender systems suffer from biases that cause the collected feedback to incompletely reveal user preferences. While debiasing learning has been studied extensively, most methods focus on a specialized (so-called counterfactual) test environment simulated by random exposure of items, at the cost of significantly degraded accuracy in the typical (so-called factual) test environment based on actual user-item interactions. In fact, each test environment highlights a different benefit: the counterfactual test emphasizes long-term user satisfaction, while the factual test focuses on predicting subsequent user behaviors on the platform. It is therefore desirable to have a model that performs well on both tests rather than only one. In this work, we introduce a new learning framework, called Bias-adaptive Preference distillation Learning (BPL), which gradually uncovers user preferences with dual distillation strategies designed to drive high performance in both factual and counterfactual test environments. Through a specialized form of teacher-student distillation from a biased model, BPL retains accurate preference knowledge aligned with the collected feedback, leading to high performance in the factual test. Furthermore, through self-distillation with reliability filtering, BPL iteratively refines its knowledge throughout training. This enables the model to produce more accurate predictions across a broader range of user-item combinations, thereby improving performance in the counterfactual test. Comprehensive experiments validate the effectiveness of BPL in both factual and counterfactual tests. Our implementation is accessible via: https://github.com/SeongKu-Kang/BPL.
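To make the dual-distillation idea concrete, the objective described in the abstract can be pictured as a weighted sum of three terms: a factual loss on logged feedback, a teacher-distillation term from a biased model, and a reliability-filtered self-distillation term on unobserved user-item pairs. The sketch below is a hypothetical illustration only, not the paper's actual formulation: the loss weights `alpha` and `beta`, the confidence threshold `tau`, and the binarized pseudo-labels are all assumptions made for clarity.

```python
import numpy as np

def bce(p, y, eps=1e-8):
    # Binary cross-entropy between predicted probabilities p and targets y.
    return -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

def dual_distillation_loss(student, teacher, prev_student, clicks, observed,
                           alpha=0.5, beta=0.5, tau=0.1):
    """Hypothetical BPL-style objective (illustrative, not the official one).

    student, teacher, prev_student: predicted click probabilities, same shape.
    clicks: binary feedback matrix; observed: boolean mask of logged pairs.
    """
    # (1) Factual term: fit the logged (biased) feedback.
    l_factual = bce(student[observed], clicks[observed])

    # (2) Teacher-student distillation from a biased teacher, preserving
    #     preference knowledge aligned with the collected feedback.
    l_teacher = bce(student[observed], teacher[observed])

    # (3) Self-distillation with reliability filtering: on unobserved pairs,
    #     keep only predictions the previous student made confidently
    #     (probability near 0 or 1, i.e. |p - 0.5| > 0.5 - tau).
    unobserved = ~observed
    confident = unobserved & (np.abs(prev_student - 0.5) > 0.5 - tau)
    if confident.any():
        pseudo = (prev_student[confident] > 0.5).astype(float)
        l_self = bce(student[confident], pseudo)
    else:
        l_self = 0.0

    return l_factual + alpha * l_teacher + beta * l_self
```

In an actual training loop, `prev_student` would be a frozen snapshot of the student from an earlier stage, so the self-distillation targets are refined iteratively rather than fixed up front.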