Post-Training Fairness Control: A Single-Train Framework for Dynamic Fairness in Recommendation

📅 2026-01-28
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work proposes Cofair, a framework that addresses the inflexibility of existing fairness-aware recommendation systems, which typically hardcode fairness constraints during training and cannot adapt to diverse post-hoc fairness requirements. Cofair employs a shared representation layer coupled with fairness-conditioned adapters, enabling dynamic adjustment of recommendation fairness levels after a single training run. It further introduces a user-level monotonicity regularizer to guarantee that fairness strictly improves as the target fairness constraint becomes more stringent. Theoretical analysis demonstrates that its adversarial training mechanism effectively bounds demographic parity violation. Extensive experiments show that Cofair is compatible with various backbone models and achieves competitive or superior fairness-accuracy trade-offs across multiple datasets, while eliminating the need for repeated training under different fairness objectives.

📝 Abstract
Despite growing efforts to mitigate unfairness in recommender systems, existing fairness-aware methods typically fix the fairness requirement at training time and provide limited post-training flexibility. However, in real-world scenarios, diverse stakeholders may demand differing fairness requirements over time, so retraining for different fairness requirements becomes prohibitive. To address this limitation, we propose Cofair, a single-train framework that enables post-training fairness control in recommendation. Specifically, Cofair introduces a shared representation layer with fairness-conditioned adapter modules to produce user embeddings specialized for varied fairness levels, along with a user-level regularization term that guarantees user-wise monotonic fairness improvements across these levels. We theoretically establish that the adversarial objective of Cofair upper bounds demographic parity and the regularization term enforces progressive fairness at user level. Comprehensive experiments on multiple datasets and backbone models demonstrate that our framework provides dynamic fairness at different levels, delivering comparable or better fairness-accuracy curves than state-of-the-art baselines, without the need to retrain for each new fairness requirement. Our code is publicly available at https://github.com/weixinchen98/Cofair.
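The core mechanism described above, fairness-conditioned user embeddings whose unfairness shrinks monotonically with the requested fairness level, can be illustrated with a toy sketch. This is not the paper's adversarially trained implementation: here the "adapter" is simply a λ-scaled removal of a group-aligned component from a shared embedding, and the group direction, item vector, and data are all synthetic assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: shared user embeddings and a binary sensitive attribute.
n_users, dim = 200, 16
U = rng.normal(size=(n_users, dim))          # shared representation layer output
groups = rng.integers(0, 2, size=n_users)    # sensitive attribute (0/1)
w_item = rng.normal(size=dim)                # one item's scoring vector (illustrative)

# Direction along which group membership is encoded (illustrative choice:
# normalized difference of group-mean embeddings).
g_dir = U[groups == 1].mean(0) - U[groups == 0].mean(0)
g_dir /= np.linalg.norm(g_dir)

def adapt(U, lam):
    """Fairness-conditioned adapter (sketch): at level lam in [0, 1],
    remove a lam-fraction of the group-aligned embedding component."""
    return U - lam * np.outer(U @ g_dir, g_dir)

def dp_gap(E):
    """Demographic-parity-style gap: difference in mean predicted score
    between the two sensitive groups for this item."""
    scores = E @ w_item
    return abs(scores[groups == 1].mean() - scores[groups == 0].mean())

# The gap shrinks monotonically as the fairness level increases,
# mirroring the monotonic fairness improvement Cofair regularizes for.
gaps = [dp_gap(adapt(U, lam)) for lam in (0.0, 0.5, 1.0)]
assert gaps[0] > gaps[1] > gaps[2]
```

In this linear sketch the gap is exactly (1 - λ) times the original gap, so monotonicity holds by construction; Cofair instead enforces it at the user level via a regularizer, since learned adapters offer no such closed-form guarantee.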
Problem

Research questions and friction points this paper is trying to address.

fairness control
post-training
recommender systems
dynamic fairness
fairness requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

post-training fairness control
fairness-conditioned adapter
dynamic fairness
user-level regularization
single-train framework