Enhancing Fairness in Unsupervised Graph Anomaly Detection through Disentanglement

📅 2024-06-03
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing unsupervised graph anomaly detection methods neglect fairness, leading to discriminatory decisions biased by sensitive attributes (e.g., gender, race). To address this, we propose DEFEND, the first framework that jointly optimizes detection utility and fairness. DEFEND employs representation disentanglement, novel in unsupervised graph anomaly detection, to explicitly separate and suppress sensitive-attribute bias from node representations. It further introduces an attribute-reconstruction-based anomaly scoring mechanism, coupled with a correlation constraint that penalizes residual dependence between reconstruction error and sensitive attributes. Evaluated on multiple real-world graph datasets, DEFEND achieves an average 3.2% AUC improvement over state-of-the-art methods while reducing Equalized Odds disparity by 57%, demonstrating substantial gains in both detection performance and group fairness.

📝 Abstract
Graph anomaly detection (GAD) is increasingly crucial in various applications, ranging from financial fraud detection to fake news detection. However, current GAD methods largely overlook the fairness problem, which might result in discriminatory decisions skewed toward certain demographic groups defined on sensitive attributes (e.g., gender, religion, ethnicity, etc.). This greatly limits the applicability of these methods in real-world scenarios in light of societal and ethical restrictions. To address this critical gap, we make the first attempt to integrate fairness with utility in GAD decision-making. Specifically, we devise a novel DisEntangle-based FairnEss-aware aNomaly Detection framework on the attributed graph, named DEFEND. DEFEND first introduces disentanglement in GNNs to capture informative yet sensitive-irrelevant node representations, effectively reducing societal bias inherent in graph representation learning. Besides, to alleviate discriminatory bias in evaluating anomalous nodes, DEFEND adopts a reconstruction-based anomaly detection, which concentrates solely on node attributes without incorporating any graph structure. Additionally, given the inherent association between input and sensitive attributes, DEFEND constrains the correlation between the reconstruction error and the predicted sensitive attributes. Our empirical evaluations on real-world datasets reveal that DEFEND performs effectively in GAD and significantly enhances fairness compared to state-of-the-art baselines. To foster reproducibility, our code is available at https://github.com/AhaChang/DEFEND.
Problem

Research questions and friction points this paper is trying to address.

Address fairness in unsupervised graph anomaly detection.
Reduce societal bias in graph representation learning.
Enhance fairness and utility in anomaly detection decisions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentanglement in GNNs for bias reduction
Reconstruction-based anomaly detection on attributes
Correlation constraint between reconstruction error and predicted sensitive attributes
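The last two contributions can be illustrated with a minimal NumPy sketch. This is not DEFEND's actual implementation (the paper uses a GNN encoder and learned decoder); the function names and the use of squared error are assumptions made here for illustration:

```python
import numpy as np

def anomaly_scores(X, X_hat):
    """Per-node anomaly score from attribute reconstruction error only.
    No graph structure enters the score, mirroring the idea of scoring
    nodes purely on how well their attributes are reconstructed."""
    return np.mean((X - X_hat) ** 2, axis=1)

def correlation_penalty(errors, s_pred, eps=1e-12):
    """Absolute Pearson correlation between reconstruction errors and
    predicted sensitive attributes. Added as a loss term, it pushes the
    model to decouple anomaly scores from sensitive groups."""
    e = errors - errors.mean()
    s = s_pred - s_pred.mean()
    denom = np.sqrt((e ** 2).sum() * (s ** 2).sum()) + eps
    return abs(float((e * s).sum())) / denom
```

A fairness-aware training objective along these lines would combine the mean reconstruction error with a weighted `correlation_penalty` term, so that lowering detection loss cannot come at the cost of scores that track the sensitive attribute.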