Ensuring User-side Fairness in Dynamic Recommender Systems

📅 2023-08-29
🏛️ The Web Conference
📈 Citations: 15
Influential: 1
🤖 AI Summary
In dynamic recommendation, continual model updates often exacerbate performance disparities across user demographic groups (e.g., gender, age), yet this issue remains systematically unexplored. This paper presents the first systematic study of user-side group fairness in dynamic recommendation settings. We propose FADE, an end-to-end fine-tuning framework featuring: (i) an incremental fine-tuning strategy with periodic restarts to curb bias accumulation; (ii) a differentiable Hit (DH) metric replacing non-differentiable ranking objectives to mitigate gradient vanishing and improve optimization efficiency; and (iii) a multi-objective competitive loss coupled with a theoretically grounded update criterion. Evaluated on multiple real-world datasets, FADE significantly reduces inter-group performance gaps—by 32.7% on average—while preserving near-optimal recommendation accuracy and outperforming baselines such as NeuralNDCG in training efficiency.
📝 Abstract
User-side group fairness is crucial for modern recommender systems, alleviating performance disparities among user groups defined by sensitive attributes like gender, race, or age. In the ever-evolving landscape of user-item interactions, continual adaptation to newly collected data is crucial for recommender systems to stay aligned with the latest user preferences. However, we observe that such continual adaptation often worsens performance disparities. This necessitates a thorough investigation into user-side fairness in dynamic recommender systems. This problem is challenging due to distribution shifts, frequent model updates, and non-differentiability of ranking metrics. To our knowledge, this paper presents the first principled study on ensuring user-side fairness in dynamic recommender systems. We start with theoretical analyses on fine-tuning vs. retraining, showing that the best practice is incremental fine-tuning with restart. Guided by our theoretical analyses, we propose FAir Dynamic rEcommender (FADE), an end-to-end fine-tuning framework to dynamically ensure user-side fairness over time. To overcome the non-differentiability of recommendation metrics in the fairness loss, we further introduce Differentiable Hit (DH) as an improvement over the recent NeuralNDCG method, not only alleviating its gradient vanishing issue but also achieving higher efficiency. Besides that, we also address the instability issue of the fairness loss by leveraging the competing nature between the recommendation loss and the fairness loss. Through extensive experiments on real-world datasets, we demonstrate that FADE effectively and efficiently reduces performance disparities with little sacrifice in the overall recommendation performance.
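The abstract does not spell out the DH formulation, but differentiable surrogates for Hit@k are commonly built by relaxing the hard rank comparison with sigmoids. The sketch below illustrates that general idea; the soft-rank construction and the temperature `tau` are assumptions for illustration, not the paper's exact metric:

```python
import math

def differentiable_hit(pos_score, neg_scores, k, tau=1.0):
    """Smooth surrogate for the Hit@k indicator (illustrative sketch).

    The hard rank of the positive item is 1 plus the number of negative
    items scored above it. Each hard comparison is relaxed with a sigmoid
    of temperature `tau`, so the resulting "soft hit" admits gradients
    with respect to the model scores.
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    # Soft rank: 1 + expected number of negatives beating the positive item.
    soft_rank = 1.0 + sum(sigmoid((s - pos_score) / tau) for s in neg_scores)
    # Soft indicator for "soft_rank <= k"; approaches 0/1 as tau -> 0.
    return sigmoid((k - soft_rank) / tau)
```

With a clearly top-ranked positive item the surrogate approaches 1, and with a deeply buried one it approaches 0, while staying differentiable in between.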
Problem

Research questions and friction points this paper is trying to address.

Ensuring fairness in dynamic recommender systems for user groups
Addressing performance disparities exacerbated by continual adaptation
Overcoming non-differentiability and instability in fairness metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incremental fine-tuning with restart
Differentiable Hit for fairness loss
Competing nature for loss stability
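One way to read the "competing nature" contribution above is as a single scalar objective in which the fairness term pulls against the recommendation loss. The sketch below is a minimal illustration of that structure; the absolute-gap fairness term and the trade-off weight `lam` are assumptions, not the paper's exact loss:

```python
def fade_style_loss(rec_loss, hit_group_a, hit_group_b, lam=0.5):
    """Illustrative combined objective (sketch, not the paper's formulation).

    The fairness term penalizes the gap in (differentiable) hit rate
    between two user groups; because shrinking the gap can trade off
    against raw accuracy, the two terms compete, which the paper
    exploits for stability. `lam` is an illustrative trade-off weight.
    """
    fairness_loss = abs(hit_group_a - hit_group_b)
    return rec_loss + lam * fairness_loss
```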
Hyunsik Yoo
University of Illinois Urbana-Champaign
data mining · machine learning · recommender systems · algorithmic fairness
Zhichen Zeng
University of Illinois Urbana-Champaign
Jian Kang
University of Rochester
Zhining Liu
University of Illinois Urbana-Champaign
David Zhou
University of Illinois Urbana-Champaign
Fei Wang
Amazon.com, Inc.
Eunice Chan
University of Illinois Urbana-Champaign
H. Tong
University of Illinois Urbana-Champaign