The Double-Edged Sword of Knowledge Transfer: Diagnosing and Curing Fairness Pathologies in Cross-Domain Recommendation

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the underexplored issue of group-level unfairness in cross-domain recommendation, where knowledge transfer can propagate biases from the source domain and allocate cross-domain information gains inequitably across user groups. The paper systematically identifies and formally characterizes these two fairness pathologies for the first time. To mitigate them, the authors propose the Cross-Domain Fairness Augmentation (CDFA) framework, which combines a dual-component intervention: adaptive integration of unlabeled data to alleviate bias propagation, and an information-theoretic model to redistribute informational benefits equitably across user groups. Extensive experiments on multiple real-world datasets demonstrate that CDFA significantly reduces unfairness while maintaining or even improving recommendation performance, effectively balancing fairness and recommendation utility.

📝 Abstract
Cross-domain recommendation (CDR) offers an effective strategy for improving recommendation quality in a target domain by leveraging auxiliary signals from source domains. Nonetheless, emerging evidence shows that CDR can inadvertently heighten group-level unfairness. In this work, we conduct a comprehensive theoretical and empirical analysis to uncover why these fairness issues arise. Specifically, we identify two key challenges: (i) Cross-Domain Disparity Transfer, wherein existing group-level disparities in the source domain are systematically propagated to the target domain; and (ii) Unfairness from Cross-Domain Information Gain, wherein the benefits derived from cross-domain knowledge are unevenly allocated among distinct groups. To address these two challenges, we propose a Cross-Domain Fairness Augmentation (CDFA) framework composed of two key components. First, it mitigates cross-domain disparity transfer by adaptively integrating unlabeled data to equilibrate the informativeness of training signals across groups. Second, it redistributes cross-domain information gains via an information-theoretic approach to ensure equitable benefit allocation across groups. Extensive experiments on multiple datasets and baselines demonstrate that our framework significantly reduces unfairness in CDR without sacrificing overall recommendation performance, and in some cases even enhances it.
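The two pathologies in the abstract can be made concrete with a small numeric sketch. The paper's actual metrics and formulations are not given here, so the following is a minimal illustration under assumed definitions: per-user utility (e.g., NDCG) is compared between two user groups, and "information gain" is taken to be each user's improvement from single-domain to cross-domain recommendation. The function names `group_disparity` and `info_gain_disparity` and all numbers are hypothetical.

```python
import numpy as np

def group_disparity(utilities, groups):
    """Absolute gap between the mean utilities of two user groups (0 and 1)."""
    u, g = np.asarray(utilities, dtype=float), np.asarray(groups)
    return abs(u[g == 0].mean() - u[g == 1].mean())

def info_gain_disparity(single_domain, cross_domain, groups):
    """Disparity in per-user cross-domain information gain, measured as the
    improvement of cross-domain utility over a single-domain baseline."""
    gains = np.asarray(cross_domain, dtype=float) - np.asarray(single_domain, dtype=float)
    return group_disparity(gains, groups)

# Illustrative per-user utility scores (e.g., NDCG@k), not real results.
single = [0.30, 0.32, 0.28, 0.31]   # single-domain baseline
cross  = [0.40, 0.41, 0.30, 0.32]   # after cross-domain transfer
groups = [0, 0, 1, 1]               # group membership per user

# Group 0 benefits far more from transfer than group 1: both the post-transfer
# utility gap and the gap in information gains are nonzero.
print(group_disparity(cross, groups))
print(info_gain_disparity(single, cross, groups))
```

In this toy setting, cross-domain transfer widens the utility gap between groups even though every user improves, which is exactly the "unequal allocation of information gain" the abstract describes; a fairness intervention like CDFA would aim to drive both printed quantities toward zero without lowering the mean utility.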
Problem

Research questions and friction points this paper is trying to address.

cross-domain recommendation
fairness
disparity transfer
information gain
group-level unfairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Domain Recommendation
Fairness in AI
Disparity Transfer
Information-Theoretic Fairness
Unlabeled Data Augmentation