AI Summary
This work addresses a critical limitation in existing ranking fairness approaches, which predominantly rely on exposure position while neglecting contextual factors, such as time, that significantly influence content providers' actual income. To bridge this gap, the paper introduces the concept of "income fairness," explicitly modeling the nonlinear relationship between time and income. It proposes a Dynamic-Income-Derivative-aware Ranking Fairness framework (DIDRF) that dynamically adjusts ranking strategies based on marginal income gradients. By integrating Taylor-expansion approximations within a joint online-offline optimization framework, DIDRF effectively captures temporal dynamics in income generation. Extensive experiments demonstrate that DIDRF consistently outperforms state-of-the-art methods across diverse time-income functions, achieving substantial improvements in both income fairness and overall ranking performance in offline and online evaluations.
Abstract
Ranking is central to information distribution in web search and recommendation. In ranking optimization, fairness to item providers is now viewed as a crucial factor alongside ranking relevance for users. Among the many notions of fairness, one widely recognized concept is Exposure Fairness. However, it measures exposure solely by position, overlooking other factors that significantly influence income, such as time. To address this limitation, we propose to study ranking fairness when the provider utility is influenced by other contextual factors and is neither equal nor proportional to item exposure. We give a formal definition of Income Fairness and develop a corresponding measurement metric. Simulated experiments show that existing exposure-fairness-based ranking algorithms fail to optimize the proposed income fairness. We therefore propose the Dynamic-Income-Derivative-aware Ranking Fairness algorithm (DIDRF), which, based on the marginal income gain at the present timestep, uses Taylor-expansion-based gradients to jointly optimize ranking effectiveness and income fairness. In both offline and online settings with diverse time-income functions, DIDRF consistently outperforms state-of-the-art methods.
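To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of the kind of reasoning DIDRF is described as performing: when income is a nonlinear function of time, the income gained from a small additional slice of exposure is approximated by a first-order Taylor expansion, so the income *derivative* at the current timestep, rather than exposure alone, drives the fairness signal. The `income` function below is a hypothetical saturating time-income curve chosen purely for illustration.

```python
import math

def income(t):
    # Hypothetical nonlinear time-income function: income grows with
    # exposure time but saturates (diminishing returns).
    return 10.0 * math.log1p(t)

def marginal_income(t, dt=1e-4):
    # Central-difference estimate of the income derivative f'(t) --
    # the marginal income gain DIDRF reasons about at each timestep.
    return (income(t + dt) - income(t - dt)) / (2.0 * dt)

def taylor_income_gain(t, step):
    # First-order Taylor approximation of income gained over `step`:
    #   f(t + step) - f(t)  ~=  f'(t) * step
    return marginal_income(t) * step

# A provider with little accumulated exposure time sits on the steep
# part of the curve, so an extra unit of exposure yields more income
# than the same unit given to a long-exposed provider.
gain_new = taylor_income_gain(t=1.0, step=1.0)    # newer provider
gain_old = taylor_income_gain(t=100.0, step=1.0)  # established provider
```

Under this view, equalizing exposure does not equalize income: a fairness-aware ranker would weight candidates by their marginal income gain, which is exactly where exposure-only fairness metrics and the proposed income fairness diverge.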