Breaking the Dyadic Barrier: Rethinking Fairness in Link Prediction Beyond Demographic Parity

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing fairness evaluation for link prediction relies on binary demographic parity, which fails to expose subtle biases across subgroups and is ill-suited for ranking-oriented tasks. This paradigm obscures structural inequities in graph-structured data. To address these limitations, we propose a non-binary fairness analysis framework comprising: (1) a decoupled link predictor that separates representation learning from fairness-aware optimization; (2) a lightweight post-processing module tailored to ranked outputs; and (3) a more expressive, subgroup-level fairness metric suite. Across multiple benchmark graph datasets, our approach substantially reduces subgroup-level bias while preserving predictive utility, achieving state-of-the-art fairness–utility trade-offs. The framework enables fine-grained fairness assessment in graph learning, supporting nuanced diagnosis of structural disparities beyond coarse demographic categories.
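To make the dyadic-vs-subgroup distinction concrete, the sketch below (synthetic scores and group labels; all names and numbers are illustrative, not from the paper) contrasts the single dyadic demographic-parity gap with a per-subgroup-pair view of the same predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predicted link scores; each edge endpoint carries a sensitive
# group label in {0, 1, 2}. Purely illustrative data, not from the paper.
n = 3000
gu = rng.integers(0, 3, n)       # group of endpoint u
gv = rng.integers(0, 3, n)       # group of endpoint v
scores = rng.random(n)           # predicted link probabilities

intra = gu == gv
# Dyadic demographic parity: a single number comparing the mean score of
# intra-group edges against inter-group edges.
dyadic_gap = abs(scores[intra].mean() - scores[~intra].mean())

# Subgroup-level view: the mean score for every (group_u, group_v) pair.
# Disparities between specific pairs can be averaged away in the dyadic gap.
pair_means = {
    (a, b): scores[(gu == a) & (gv == b)].mean()
    for a in range(3) for b in range(3)
}
subgroup_gap = max(pair_means.values()) - min(pair_means.values())

print(f"dyadic gap:   {dyadic_gap:.3f}")
print(f"subgroup gap: {subgroup_gap:.3f}")
```

Because the intra- and inter-group means are both weighted averages of the per-pair means, the subgroup-level gap always bounds the dyadic gap from above: a near-zero dyadic gap can therefore coexist with large disparities between specific subgroup pairs.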

📝 Abstract
Link prediction is a fundamental task in graph machine learning with applications ranging from social recommendation to knowledge graph completion. Fairness in this setting is critical, as biased predictions can exacerbate societal inequalities. Prior work adopts a dyadic definition of fairness, enforcing fairness through demographic parity between intra-group and inter-group link predictions. However, we show that this dyadic framing can obscure underlying disparities across subgroups, allowing systemic biases to go undetected. Moreover, we argue that demographic parity does not meet desired properties for fairness assessment in ranking-based tasks such as link prediction. We formalize the limitations of existing fairness evaluations and propose a framework that enables a more expressive assessment. Additionally, we propose a lightweight post-processing method combined with decoupled link predictors that effectively mitigates bias and achieves state-of-the-art fairness-utility trade-offs.
Problem

Research questions and friction points this paper is trying to address.

Challenges dyadic fairness definitions in link prediction tasks
Identifies limitations of demographic parity for ranking-based fairness assessment
Proposes expressive evaluation framework and bias mitigation method
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes expressive fairness assessment framework beyond dyadic definitions
Introduces lightweight post-processing method for bias mitigation
Combines decoupled link predictors with fairness optimization
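The abstract does not detail the post-processing method itself, but a common lightweight baseline in this spirit is per-group score recalibration applied after a frozen predictor. The sketch below (a generic illustration under that assumption, not the paper's actual algorithm; the function name is hypothetical) maps each subgroup's scores to within-group rank quantiles so every subgroup's score distribution becomes comparable, without touching the learned representations:

```python
import numpy as np

def quantile_normalize_per_group(scores, groups):
    """Recalibrate scores to within-group rank quantiles.

    A generic post-processing baseline (illustrative, not necessarily the
    paper's method): the base link predictor stays frozen and only its
    output scores are remapped, so each subgroup's calibrated scores are
    spread evenly over (0, 1).
    """
    out = np.empty_like(scores, dtype=float)
    for g in np.unique(groups):
        mask = groups == g
        # Double argsort yields each score's 0-based rank within its group.
        ranks = scores[mask].argsort().argsort()
        out[mask] = (ranks + 1) / (mask.sum() + 1)
    return out

# Toy example: group 0 dominates the raw scores; after recalibration the
# two groups have identical score distributions (and equal means).
scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])
groups = np.array([0, 0, 0, 1, 1, 1])
calibrated = quantile_normalize_per_group(scores, groups)
print(calibrated)  # [0.75 0.5 0.25 0.75 0.5 0.25]
```

Operating only on final scores is what makes such a step "lightweight": it requires no retraining and composes naturally with a decoupled predictor, though it equalizes distributions rather than optimizing any specific ranking-fairness metric.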
João Mattos
Computer Science Department, Rice University
Debolina Halder Lina
Computer Science Department, Rice University
Arlei Silva
Rice University
Data Mining · Algorithms · Machine Learning · Data Science · Network Science