Mitigating topology biases in Graph Diffusion via Counterfactual Intervention

📅 2026-03-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the issue that graph diffusion models often inherit and amplify topological biases associated with sensitive attributes—such as gender, age, or region—leading to unfair generation outcomes. To mitigate this, we propose FairGDiff, the first method to directly integrate counterfactual intervention into the graph diffusion process. By constructing a causal model that answers the question “Would the graph structure change if the sensitive attribute were different?”, FairGDiff applies unbiased interventions during both forward and reverse diffusion stages, enabling fair graph generation while preserving structural utility. Notably, our approach operates without requiring complete labels or simultaneous updates of graph structure and node attributes, making it applicable to general graph diffusion settings. Experiments on multiple real-world datasets demonstrate that FairGDiff significantly outperforms existing methods, achieving a superior trade-off between fairness and generative utility, along with strong scalability.

📝 Abstract
Graph diffusion models have gained significant attention in graph generation tasks, but they often inherit and amplify topology biases from sensitive attributes (e.g., gender, age, region), leading to unfair synthetic graphs. Existing fair graph generation methods based on diffusion models are limited to specific graph applications with complete labels, or require simultaneous updates of graph structure and node attributes, making them unsuitable for general use. To relax these limitations, we apply debiasing directly to the graph topology and propose the Fair Graph Diffusion Model (FairGDiff), a counterfactual-based one-step solution that mitigates topology biases while balancing fairness and utility. In detail, we construct a causal model to capture the relationships among sensitive attributes, biased link formation, and the generated graph structure. By answering the counterfactual question "Would the graph structure change if the sensitive attribute were different?", we estimate an unbiased treatment and incorporate it into the diffusion process. FairGDiff integrates counterfactual learning into both forward diffusion and backward denoising, ensuring that the generated graphs are independent of sensitive attributes while preserving structural integrity. Extensive experiments on real-world datasets demonstrate that FairGDiff achieves a superior trade-off between fairness and utility, outperforming existing fair graph generation methods while maintaining scalability.
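The counterfactual question above can be illustrated with a toy sketch (this is not the paper's actual model; the link predictor and all names below are hypothetical). For a binary sensitive attribute, averaging a predictor's factual and attribute-flipped edge probabilities yields scores that are exactly invariant to the attribute, which is the kind of independence the intervention targets:

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_probs(z, sens):
    """Toy link predictor: edge probabilities from node embeddings z,
    plus a same-group bias driven by the sensitive attribute sens."""
    logits = z @ z.T + 0.5 * np.equal.outer(sens, sens)
    return 1.0 / (1.0 + np.exp(-logits))

def counterfactual_intervention(z, sens):
    """Average the factual and counterfactual (attribute-flipped)
    predictions, removing the dependence on the binary attribute."""
    return 0.5 * (edge_probs(z, sens) + edge_probs(z, 1 - sens))

z = rng.normal(size=(6, 4))          # hypothetical node embeddings
sens = np.array([0, 0, 0, 1, 1, 1])  # binary sensitive attribute

p_fair = counterfactual_intervention(z, sens)

# Debiased probabilities are identical under either attribute assignment.
assert np.allclose(p_fair, counterfactual_intervention(z, 1 - sens))
```

In the paper's setting this kind of intervention is woven into both the forward diffusion and backward denoising stages rather than applied as a single post hoc averaging step.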
Problem

Research questions and friction points this paper is trying to address.

topology bias
graph diffusion
fairness
sensitive attributes
counterfactual intervention
Innovation

Methods, ideas, or system contributions that make the work stand out.

counterfactual intervention
graph diffusion
fair graph generation
topology bias mitigation
causal modeling