🤖 AI Summary
Existing fairness evaluation for link prediction relies on binary demographic parity, which fails to expose subtle biases across subgroups and is ill-suited for ranking-oriented tasks. This paradigm obscures structural inequities in graph-structured data. To address these limitations, we propose a non-binary fairness analysis framework comprising: (1) a decoupled link predictor that separates representation learning from fairness-aware optimization; (2) a lightweight post-processing module tailored to ranked outputs; and (3) a more expressive, subgroup-level fairness metric suite. Evaluated on multiple benchmark graph datasets, our approach significantly reduces subgroup-level bias while preserving predictive utility, achieving state-of-the-art fairness–utility trade-offs. The framework establishes a new paradigm for fine-grained fairness assessment in graph learning, enabling nuanced diagnosis of structural disparities beyond coarse demographic categories.
📝 Abstract
Link prediction is a fundamental task in graph machine learning, with applications ranging from social recommendation to knowledge graph completion. Fairness in this setting is critical, as biased predictions can exacerbate societal inequalities. Prior work adopts a dyadic definition of fairness, enforcing demographic parity between intra-group and inter-group link predictions. However, we show that this dyadic framing can obscure underlying disparities across subgroups, allowing systemic biases to go undetected. Moreover, we argue that demographic parity does not satisfy the properties desired of a fairness assessment in ranking-based tasks such as link prediction. We formalize the limitations of existing fairness evaluations and propose a framework that enables a more expressive assessment. Additionally, we propose a lightweight post-processing method combined with decoupled link predictors that effectively mitigates bias and achieves state-of-the-art fairness–utility trade-offs.
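To make the dyadic baseline concrete, the following is a minimal sketch of the demographic-parity gap between intra-group and inter-group link predictions, i.e. the quantity prior work equalizes. The function name `dyadic_parity_gap` and the assumption that scores are predicted link probabilities are ours for illustration, not the paper's implementation.

```python
import numpy as np

def dyadic_parity_gap(scores, src_groups, dst_groups):
    """Dyadic demographic parity gap for link prediction:
    |E[score | same group] - E[score | different groups]|.

    scores:     predicted link probabilities, one per candidate edge
    src_groups: group label of each edge's source node
    dst_groups: group label of each edge's destination node
    """
    scores = np.asarray(scores, dtype=float)
    # An edge is intra-group when both endpoints share a group label.
    intra = np.asarray(src_groups) == np.asarray(dst_groups)
    return abs(scores[intra].mean() - scores[~intra].mean())
```

A single scalar like this is exactly what the abstract argues against: averaging over all intra- and inter-group pairs can cancel out opposite-signed disparities between individual subgroup pairs, letting systemic bias go undetected.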