Adaptive Dual Prompting: Hierarchical Debiasing for Fairness-aware Graph Neural Networks

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Recent graph prompting methods focus on downstream utility and often overlook fairness: pre-trained GNNs can produce discriminatory node representations across demographic subgroups because downstream graph data carries bias in both node attributes and graph structure. To address this, the authors propose ADPrompt, an adaptive dual-prompt framework that jointly enhances fairness and utility through hierarchical debiasing. At the input layer, an Adaptive Feature Rectification module learns attribute prompts that suppress sensitive information, reducing bias at the source; at each hidden layer, an Adaptive Message Calibration module generates structure prompts that softly adjust messages from neighboring nodes, mitigating topological bias. Both prompting modules are jointly optimized while the pre-trained GNN stays frozen. Evaluated on four benchmark datasets under four pre-training strategies, ADPrompt outperforms seven baseline methods on node classification, achieving strong accuracy while improving fairness metrics such as demographic parity and equalized odds.

📝 Abstract
In recent years, pre-training Graph Neural Networks (GNNs) through self-supervised learning on unlabeled graph data has emerged as a widely adopted paradigm in graph learning. Although this paradigm is effective for pre-training powerful GNN models, an objective gap often exists between pre-training and downstream tasks. To bridge this gap, graph prompting adapts pre-trained GNN models to specific downstream tasks with extra learnable prompts while keeping the pre-trained models frozen. Because recent graph prompting methods largely focus on enhancing model utility on downstream tasks, they often overlook fairness concerns when designing prompts for adaptation. In fact, pre-trained GNN models can produce discriminatory node representations across demographic subgroups, as downstream graph data inherently contains biases in both node attributes and graph structures. To address this issue, we propose an Adaptive Dual Prompting (ADPrompt) framework that enhances fairness when adapting pre-trained GNN models to downstream tasks. To mitigate attribute bias, we design an Adaptive Feature Rectification module that learns customized attribute prompts to suppress sensitive information at the input layer, reducing bias at the source. We then propose an Adaptive Message Calibration module that generates structure prompts at each layer, adjusting the messages from neighboring nodes to enable dynamic, soft calibration of the information flow. Finally, ADPrompt jointly optimizes the two prompting modules to adapt the pre-trained GNN while enhancing fairness. We conduct extensive experiments on four datasets with four pre-training strategies to evaluate the performance of ADPrompt. The results demonstrate that ADPrompt outperforms seven baseline methods on node classification tasks.
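The paper does not publish its exact equations here, but the input-layer idea in the abstract (learnable attribute prompts that suppress sensitive information before the frozen GNN sees the features) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the `gate_logits` soft mask, the additive `attr_prompt`, and the function name `rectify_features` are hypothetical, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rectify_features(x, gate_logits, attr_prompt):
    """Illustrative attribute-prompt rectification (hypothetical form).

    x           : (n, d) node feature matrix
    gate_logits : (d,) learnable per-dimension gate logits (assumption)
    attr_prompt : (d,) learnable additive attribute prompt (assumption)
    """
    gate = sigmoid(gate_logits)        # soft mask in (0, 1)
    return x * gate + attr_prompt      # down-weight sensitive dims, then shift

# Toy example: 3 nodes, 4 features; dimension 0 plays the role of a
# sensitive attribute, so its gate is driven toward zero during training.
x = np.array([[1.0, 2.0, 0.5, -1.0],
              [0.0, 1.0, 1.5,  2.0],
              [2.0, 0.0, 1.0,  0.5]])
gate_logits = np.array([-6.0, 6.0, 6.0, 6.0])
attr_prompt = np.zeros(4)
x_hat = rectify_features(x, gate_logits, attr_prompt)
```

In a full training loop, `gate_logits` and `attr_prompt` would be the only trainable parameters at this stage, optimized jointly with the other prompts while the pre-trained GNN weights stay frozen.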
Problem

Research questions and friction points this paper is trying to address.

Bridging objective gaps between pre-training and downstream graph tasks
Mitigating fairness concerns in graph neural network adaptation
Reducing demographic bias in node representations through dual prompting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Feature Rectification module suppresses sensitive attributes
Adaptive Message Calibration adjusts neighbor messages per layer
Joint optimization of dual prompts enhances fairness adaptation
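The per-layer message calibration described above can also be sketched. This is a hedged illustration, not the paper's method: the mean aggregation, the frozen weight `W`, and the per-edge additive `edge_prompts` dictionary are all assumptions chosen to show how a structure prompt can softly adjust the message each neighbor sends.

```python
import numpy as np

def calibrated_message_passing(h, edges, W, edge_prompts):
    """One mean-aggregation GNN layer with per-edge structure prompts.

    h            : (n, d) node hidden states
    edges        : list of directed (src, dst) pairs
    W            : (d, d) frozen pre-trained weight matrix (assumption)
    edge_prompts : dict mapping (src, dst) -> (d,) learnable prompt
    """
    n, d = h.shape
    agg = np.zeros((n, d))
    deg = np.zeros(n)
    for (s, t) in edges:
        # The structure prompt additively calibrates the neighbor's message.
        msg = h[s] @ W + edge_prompts.get((s, t), np.zeros(d))
        agg[t] += msg
        deg[t] += 1
    deg[deg == 0] = 1.0              # avoid division by zero for isolated nodes
    return agg / deg[:, None]

# Toy example: one edge 0 -> 1; the prompt dampens part of the message.
h = np.array([[1.0, 0.0],
              [0.0, 1.0]])
edges = [(0, 1)]
W = np.eye(2)
edge_prompts = {(0, 1): np.array([-0.5, 0.0])}
out = calibrated_message_passing(h, edges, W, edge_prompts)
```

Only the prompts are trained; stacking one such calibration per layer mirrors the "per layer" adjustment listed above, with gradients from the fairness-aware objective flowing into the prompts alone.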