Enhancing Fairness in Autoencoders for Node-Level Graph Anomaly Detection

📅 2025-08-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Graph anomaly detection (GAD) models based on autoencoders often inherit and amplify biases from sensitive attributes present in training data, leading to unfair node-level decisions. Method: We propose DECAF-GAD, a fairness-aware framework that (i) constructs a structural causal model to disentangle sensitive attributes from graph structure and node feature representations; (ii) designs a fairness-guided loss function and a dedicated disentangled autoencoder architecture; and (iii) integrates adversarial training with counterfactual regularization. Contribution/Results: To our knowledge, DECAF-GAD is the first method to systematically enhance fairness in node-level GAD under the autoencoder paradigm. Experiments on multiple synthetic and real-world graph datasets demonstrate that DECAF-GAD achieves state-of-the-art anomaly detection performance while significantly outperforming existing baselines in fairness—improving average fairness metrics by 23.6%—thereby enabling joint optimization of detection accuracy and fairness.

📝 Abstract
Graph anomaly detection (GAD) has become an increasingly important task across various domains. With the rapid development of graph neural networks (GNNs), GAD methods have achieved significant performance improvements. However, fairness considerations in GAD remain largely underexplored. Indeed, GNN-based GAD models can inherit and amplify biases present in training data, potentially leading to unfair outcomes. While existing efforts have focused on developing fair GNNs, most approaches target node classification tasks, where models often rely on simple layer architectures rather than autoencoder-based structures, which are the most widely used architectures for anomaly detection. To address fairness in autoencoder-based GAD models, we propose DisEntangled Counterfactual Adversarial Fair (DECAF)-GAD, a framework that alleviates bias while preserving GAD performance. Specifically, we introduce a structural causal model (SCM) to disentangle sensitive attributes from learned representations. Based on this causal framework, we formulate a specialized autoencoder architecture along with a fairness-guided loss function. Through extensive experiments on both synthetic and real-world datasets, we demonstrate that DECAF-GAD not only achieves competitive anomaly detection performance but also significantly enhances fairness metrics compared to baseline GAD methods. Our code is available at https://github.com/Tlhey/decaf_code.
Problem

Research questions and friction points this paper is trying to address.

Addressing fairness in autoencoder-based graph anomaly detection
Mitigating bias amplification in GNN models for node-level tasks
Disentangling sensitive attributes from representations to ensure equitable outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangles sensitive attributes using structural causal model
Specialized autoencoder architecture with fairness-guided loss
Counterfactual adversarial framework for bias reduction
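The three ingredients above (disentangled embeddings, a fairness-guided loss, and counterfactual adversarial training) can be sketched as a single encoder objective: reconstruction error, minus the adversary's loss for predicting the sensitive attribute from the non-sensitive embedding (so the encoder is rewarded when the adversary fails), plus a counterfactual consistency penalty between embeddings of the original graph and a graph with the sensitive attribute flipped. The numpy function below is an illustrative sketch under these assumptions, not the paper's implementation; the name `decaf_style_loss` and the weights `alpha`, `beta` are hypothetical.

```python
import numpy as np

def decaf_style_loss(x, x_hat, z_ns, z_ns_cf, s, s_pred, alpha=1.0, beta=1.0):
    """Illustrative fairness-guided GAD objective (names are assumptions).

    x, x_hat  : node features and their autoencoder reconstruction
    z_ns      : non-sensitive embedding of the original graph
    z_ns_cf   : embedding after flipping the sensitive attribute (counterfactual)
    s, s_pred : binary sensitive labels and the adversary's predicted P(s = 1)
    """
    eps = 1e-12
    # (1) standard autoencoder reconstruction error
    recon = np.mean((x - x_hat) ** 2)
    # (2) adversary's binary cross-entropy for recovering s from z_ns;
    #     the encoder SUBTRACTS this term (min-max game), so embeddings
    #     that hide the sensitive attribute lower the encoder's loss
    adv = -np.mean(s * np.log(s_pred + eps) + (1 - s) * np.log(1 - s_pred + eps))
    # (3) counterfactual regularization: flipping s should not move z_ns
    cf = np.mean((z_ns - z_ns_cf) ** 2)
    return recon - alpha * adv + beta * cf
```

When the adversary is reduced to chance (`s_pred = 0.5` everywhere) and the counterfactual embedding matches the original, the loss is minimized down to the reconstruction term minus `alpha * log(2)`, which is the intended equilibrium of the min-max game.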